PR: draveness: feat: use named array instead of array in normalizing score
Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2019-08-06 12:38
Elapsed: 25m28s
Builder: gke-prow-ssd-pool-1a225945-xlz0
Refs: master:16d9a659, 80901:42856286
pod: fa36aeac-b846-11e9-8b8a-56bb4ed8de72
infra-commit: 30315db4a
job-version: v1.16.0-alpha.2.348+1c1cbf1fa1c102
repo: k8s.io/kubernetes
repo-commit: 1c1cbf1fa1c10224ca5c7171662e543a444b73a6
repos: {u'k8s.io/kubernetes': u'master:16d9a659da541fc38110c112c6ae20a1056baf37,80901:42856286f265bf9b92647bfc477b73e0ee8c6d01', u'k8s.io/perf-tests': u'master', u'k8s.io/release': u'master'}
revision: v1.16.0-alpha.2.348+1c1cbf1fa1c102

Test Failures


Up 33s

error during ./hack/e2e-internal/e2e-up.sh: exit status 1 (from junit_runner.xml)




Error lines from build-log.txt

... skipping 268 lines ...
W0806 12:49:45.212] INFO: 5123 processes: 5123 processwrapper-sandbox.
W0806 12:49:45.233] INFO: Build completed successfully, 5223 total actions
W0806 12:49:45.248] INFO: Build completed successfully, 5223 total actions
W0806 12:49:45.268] 2019/08/06 12:49:45 process.go:155: Step 'make -C /go/src/k8s.io/kubernetes bazel-release' finished in 10m13.721714976s
W0806 12:49:45.269] 2019/08/06 12:49:45 util.go:255: Flushing memory.
I0806 12:49:45.372] make: Leaving directory '/go/src/k8s.io/kubernetes'
W0806 12:49:52.732] 2019/08/06 12:49:52 util.go:265: flushMem error (page cache): exit status 1
W0806 12:49:52.733] 2019/08/06 12:49:52 process.go:153: Running: /go/src/k8s.io/release/push-build.sh --nomock --verbose --noupdatelatest --bucket=kubernetes-release-pull --ci --gcs-suffix=/pull-kubernetes-e2e-gce-100-performance --allow-dup
W0806 12:49:52.829] $TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
I0806 12:49:52.930] push-build.sh: BEGIN main on fa36aeac-b846-11e9-8b8a-56bb4ed8de72 Tue Aug  6 12:49:52 UTC 2019
I0806 12:49:52.930] 
W0806 12:49:53.650] Loading: 
W0806 12:49:53.650] Loading: 0 packages loaded
... skipping 730 lines ...
W0806 12:55:57.843] NODE_NAMES=e2e-80901-95a39-minion-group-05th e2e-80901-95a39-minion-group-14qx e2e-80901-95a39-minion-group-1f6j e2e-80901-95a39-minion-group-1qxj e2e-80901-95a39-minion-group-1rv3 e2e-80901-95a39-minion-group-1sjs e2e-80901-95a39-minion-group-21ls e2e-80901-95a39-minion-group-2q2b e2e-80901-95a39-minion-group-2xzz e2e-80901-95a39-minion-group-3480 e2e-80901-95a39-minion-group-3gvn e2e-80901-95a39-minion-group-4bnk e2e-80901-95a39-minion-group-4l60 e2e-80901-95a39-minion-group-4w9z e2e-80901-95a39-minion-group-5bmr e2e-80901-95a39-minion-group-5ln0 e2e-80901-95a39-minion-group-5qcg e2e-80901-95a39-minion-group-5t3k e2e-80901-95a39-minion-group-5whf e2e-80901-95a39-minion-group-604k e2e-80901-95a39-minion-group-60mx e2e-80901-95a39-minion-group-612l e2e-80901-95a39-minion-group-6568 e2e-80901-95a39-minion-group-67sv e2e-80901-95a39-minion-group-68fk e2e-80901-95a39-minion-group-69w1 e2e-80901-95a39-minion-group-6bg3 e2e-80901-95a39-minion-group-7qw2 e2e-80901-95a39-minion-group-7s73 e2e-80901-95a39-minion-group-8gkh e2e-80901-95a39-minion-group-8l8j e2e-80901-95a39-minion-group-8q4z e2e-80901-95a39-minion-group-8qmm e2e-80901-95a39-minion-group-9315 e2e-80901-95a39-minion-group-9xvd e2e-80901-95a39-minion-group-9zc1 e2e-80901-95a39-minion-group-b9k9 e2e-80901-95a39-minion-group-bcg1 e2e-80901-95a39-minion-group-bw8b e2e-80901-95a39-minion-group-c4m6 e2e-80901-95a39-minion-group-c75b e2e-80901-95a39-minion-group-c7hx e2e-80901-95a39-minion-group-cdc6 e2e-80901-95a39-minion-group-cnpk e2e-80901-95a39-minion-group-d0fx e2e-80901-95a39-minion-group-db8q e2e-80901-95a39-minion-group-dvtn e2e-80901-95a39-minion-group-f2w6 e2e-80901-95a39-minion-group-fmv5 e2e-80901-95a39-minion-group-frq5 e2e-80901-95a39-minion-group-fz43 e2e-80901-95a39-minion-group-gc9x e2e-80901-95a39-minion-group-gwjt e2e-80901-95a39-minion-group-h0f5 e2e-80901-95a39-minion-group-h466 e2e-80901-95a39-minion-group-h5zj e2e-80901-95a39-minion-group-h73s e2e-80901-95a39-minion-group-h7pw e2e-80901-95a39-minion-group-hfkh e2e-80901-95a39-minion-group-hlck e2e-80901-95a39-minion-group-hs6s e2e-80901-95a39-minion-group-j065 e2e-80901-95a39-minion-group-k6kg e2e-80901-95a39-minion-group-kr23 e2e-80901-95a39-minion-group-kwr8 e2e-80901-95a39-minion-group-l1fr e2e-80901-95a39-minion-group-l30t e2e-80901-95a39-minion-group-l87q e2e-80901-95a39-minion-group-lp0j e2e-80901-95a39-minion-group-mb8r e2e-80901-95a39-minion-group-mks2 e2e-80901-95a39-minion-group-ntwb e2e-80901-95a39-minion-group-nvdd e2e-80901-95a39-minion-group-pkvp e2e-80901-95a39-minion-group-pwmb e2e-80901-95a39-minion-group-rbqf e2e-80901-95a39-minion-group-rkqn e2e-80901-95a39-minion-group-rsvr e2e-80901-95a39-minion-group-shrt e2e-80901-95a39-minion-group-sj3v e2e-80901-95a39-minion-group-t2gm e2e-80901-95a39-minion-group-tdr1 e2e-80901-95a39-minion-group-tnpq e2e-80901-95a39-minion-group-v1tb e2e-80901-95a39-minion-group-vh13 e2e-80901-95a39-minion-group-vh59 e2e-80901-95a39-minion-group-vl1n e2e-80901-95a39-minion-group-wfpf e2e-80901-95a39-minion-group-wsrm e2e-80901-95a39-minion-group-x4jc e2e-80901-95a39-minion-group-x556 e2e-80901-95a39-minion-group-x5rg e2e-80901-95a39-minion-group-x7hm e2e-80901-95a39-minion-group-xg3d e2e-80901-95a39-minion-group-xj1r e2e-80901-95a39-minion-group-z5bg e2e-80901-95a39-minion-group-z5zr e2e-80901-95a39-minion-group-zb52 e2e-80901-95a39-minion-group-zm16 e2e-80901-95a39-minion-group-zq56
W0806 12:56:00.944] Deleting Managed Instance Group...
W0806 12:59:15.443] ........................................Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instanceGroupManagers/e2e-80901-95a39-minion-group].
W0806 12:59:15.444] done.
W0806 12:59:24.816] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/instanceTemplates/e2e-80901-95a39-minion-template].
W0806 12:59:34.376] ssh: connect to host 35.229.86.45 port 22: Connection refused
W0806 12:59:34.382] ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
I0806 12:59:34.502] Removing etcd replica, name: e2e-80901-95a39-master, port: 2379, result: 255
W0806 12:59:35.697] ssh: connect to host 35.229.86.45 port 22: Connection refused
W0806 12:59:35.703] ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
I0806 12:59:35.808] Removing etcd replica, name: e2e-80901-95a39-master, port: 4002, result: 255
W0806 13:01:08.994] ERROR: (gcloud.compute.instances.delete) Could not fetch resource:
W0806 13:01:08.995]  - The resource 'projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-80901-95a39-master' was not found
W0806 13:01:08.995] 
W0806 13:01:16.045] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0806 13:01:16.046]  - The resource 'projects/k8s-presubmit-scale/global/firewalls/e2e-80901-95a39-master-https' is not ready
W0806 13:01:16.046] 
W0806 13:01:16.965] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0806 13:01:16.966]  - The resource 'projects/k8s-presubmit-scale/global/firewalls/e2e-80901-95a39-master-etcd' is not ready
W0806 13:01:16.967] 
W0806 13:01:17.970] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0806 13:01:17.970]  - The resource 'projects/k8s-presubmit-scale/global/firewalls/e2e-80901-95a39-minion-all' is not ready
W0806 13:01:17.970] 
W0806 13:01:18.053] Failed to delete firewall rules.
W0806 13:01:26.652] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-80901-95a39-master-ip].
W0806 13:01:55.761] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-80901-95a39-default-internal-master].
W0806 13:02:01.921] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-80901-95a39-default-internal-node].
W0806 13:02:02.590] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-80901-95a39-default-ssh].
I0806 13:02:03.762] Deleting firewall rules remaining in network e2e-80901-95a39: 
I0806 13:02:04.732] Deleting custom subnet...
W0806 13:02:05.692] ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
W0806 13:02:05.693]  - The resource 'projects/k8s-presubmit-scale/regions/us-east1/subnetworks/e2e-80901-95a39-custom-subnet' is not ready
W0806 13:02:05.693] 
W0806 13:02:14.629] ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
W0806 13:02:14.630]  - The network resource 'projects/k8s-presubmit-scale/global/networks/e2e-80901-95a39' is already being used by 'projects/k8s-presubmit-scale/regions/us-east1/subnetworks/e2e-80901-95a39-custom-subnet'
W0806 13:02:14.630] 
I0806 13:02:14.731] Failed to delete network 'e2e-80901-95a39'. Listing firewall-rules:
W0806 13:02:15.652] 
W0806 13:02:15.652] To show all fields of the firewall, please show in JSON format: --format=json
W0806 13:02:15.652] To show all fields in table format, please see the examples in --help.
W0806 13:02:15.652] 
W0806 13:02:15.931] W0806 13:02:15.930891   72255 loader.go:223] Config not found: /workspace/.kube/config
W0806 13:02:16.086] W0806 13:02:16.085813   72307 loader.go:223] Config not found: /workspace/.kube/config
... skipping 25 lines ...
W0806 13:02:17.333] Zone: us-east1-b
I0806 13:02:26.089] +++ Staging tars to Google Storage: gs://kubernetes-staging-141a37ea6d/e2e-80901-95a39-devel
I0806 13:02:40.903] +++ kubernetes-server-linux-amd64.tar.gz uploaded (sha1 = dbe3cf5cd3a1e9bed177ecd43ad1e71581249f50)
I0806 13:02:43.274] +++ kubernetes-manifests.tar.gz uploaded earlier, cloud and local file md5 match (md5 = 92c0174d5a70529198ff660789bc1d8c)
I0806 13:02:45.068] Found existing network e2e-80901-95a39 in CUSTOM mode.
W0806 13:02:46.737] Creating firewall...
W0806 13:02:47.000] failed.
W0806 13:02:47.004] ERROR: (gcloud.compute.firewall-rules.create) Could not fetch resource:
W0806 13:02:47.005]  - The resource 'projects/k8s-presubmit-scale/global/networks/e2e-80901-95a39' is not ready
W0806 13:02:47.005] 
W0806 13:02:47.672] Creating firewall...
I0806 13:02:47.877] IP aliases are enabled. Creating subnetworks.
W0806 13:02:48.526] Creating firewall...
W0806 13:02:48.702] failed.
W0806 13:02:48.702] ERROR: (gcloud.compute.firewall-rules.create) Could not fetch resource:
W0806 13:02:48.703]  - The resource 'projects/k8s-presubmit-scale/global/networks/e2e-80901-95a39' was not found
W0806 13:02:48.703] 
W0806 13:02:48.808] failed.
W0806 13:02:48.810] ERROR: (gcloud.compute.firewall-rules.create) Could not fetch resource:
W0806 13:02:48.810]  - The resource 'projects/k8s-presubmit-scale/global/networks/e2e-80901-95a39' was not found
W0806 13:02:48.811] 
I0806 13:02:48.911] Creating subnet e2e-80901-95a39:e2e-80901-95a39-custom-subnet
W0806 13:02:49.639] ERROR: (gcloud.compute.networks.subnets.create) Could not fetch resource:
W0806 13:02:49.639]  - The resource 'projects/k8s-presubmit-scale/global/networks/e2e-80901-95a39' was not found
W0806 13:02:49.639] 
W0806 13:02:49.730] 2019/08/06 13:02:49 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 33.119086384s
W0806 13:02:49.731] 2019/08/06 13:02:49 e2e.go:519: Dumping logs from nodes to GCS directly at path: gs://kubernetes-jenkins/pr-logs/pull/80901/pull-kubernetes-e2e-gce-100-performance/1158718886980358144/artifacts
W0806 13:02:49.731] 2019/08/06 13:02:49 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/80901/pull-kubernetes-e2e-gce-100-performance/1158718886980358144/artifacts
W0806 13:02:49.822] Trying to find master named 'e2e-80901-95a39-master'
... skipping 2 lines ...
I0806 13:02:49.923] Sourcing kube-util.sh
I0806 13:02:49.923] Detecting project
I0806 13:02:49.923] Project: k8s-presubmit-scale
I0806 13:02:49.923] Network Project: k8s-presubmit-scale
I0806 13:02:49.924] Zone: us-east1-b
I0806 13:02:49.924] Dumping logs from master locally to '/workspace/_artifacts'
W0806 13:02:50.617] ERROR: (gcloud.compute.addresses.describe) Could not fetch resource:
W0806 13:02:50.617]  - The resource 'projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-80901-95a39-master-ip' was not found
W0806 13:02:50.617] 
W0806 13:02:50.702] Could not detect Kubernetes master node.  Make sure you've launched a cluster with 'kube-up.sh'
I0806 13:02:50.804] Master not detected. Is the cluster up?
I0806 13:02:50.805] Dumping logs from nodes to GCS directly at 'gs://kubernetes-jenkins/pr-logs/pull/80901/pull-kubernetes-e2e-gce-100-performance/1158718886980358144/artifacts' using logexporter
I0806 13:02:50.805] Detecting nodes in the cluster
... skipping 32 lines ...
I0806 13:03:32.608] Cleared config for k8s-presubmit-scale_e2e-80901-95a39 from /workspace/.kube/config
I0806 13:03:32.609] Done
W0806 13:03:32.624] W0806 13:03:32.602760   75397 loader.go:223] Config not found: /workspace/.kube/config
W0806 13:03:32.624] W0806 13:03:32.602953   75397 loader.go:223] Config not found: /workspace/.kube/config
W0806 13:03:32.624] 2019/08/06 13:03:32 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 36.705518331s
W0806 13:03:32.624] 2019/08/06 13:03:32 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0806 13:03:32.625] 2019/08/06 13:03:32 main.go:316: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
W0806 13:03:32.625] Traceback (most recent call last):
W0806 13:03:32.625]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0806 13:03:32.625]     main(parse_args())
W0806 13:03:32.626]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0806 13:03:32.626]     mode.start(runner_args)
W0806 13:03:32.626]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0806 13:03:32.627]     check_env(env, self.command, *args)
W0806 13:03:32.627]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0806 13:03:32.627]     subprocess.check_call(cmd, env=env)
W0806 13:03:32.627]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0806 13:03:32.628]     raise CalledProcessError(retcode, cmd)
W0806 13:03:32.629] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-e2e-gce-100-performance', '--up', '--down', '--provider=gce', '--cluster=e2e-80901-95a39', '--gcp-network=e2e-80901-95a39', '--extract=local', '--gcp-nodes=100', '--gcp-project=k8s-presubmit-scale', '--gcp-zone=us-east1-b', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--nodes=100', '--test-cmd-args=--provider=gce', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/density/100_nodes/override.yaml', '--test-cmd-args=--testoverrides=./testing/load/gce/throughput_override.yaml', '--test-cmd-args=--testoverrides=./testing/prometheus/scrape-etcd.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/probes.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=100m', '--logexporter-gcs-path=gs://kubernetes-jenkins/pr-logs/pull/80901/pull-kubernetes-e2e-gce-100-performance/1158718886980358144/artifacts')' returned non-zero exit status 1
E0806 13:03:32.629] Command failed
I0806 13:03:32.629] process 523 exited with code 1 after 24.1m
E0806 13:03:32.630] FAIL: pull-kubernetes-e2e-gce-100-performance
I0806 13:03:32.630] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0806 13:03:33.153] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0806 13:03:33.210] process 75409 exited with code 0 after 0.0m
I0806 13:03:33.211] Call:  gcloud config get-value account
I0806 13:03:33.559] process 75421 exited with code 0 after 0.0m
I0806 13:03:33.560] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0806 13:03:33.560] Upload result and artifacts...
I0806 13:03:33.560] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/80901/pull-kubernetes-e2e-gce-100-performance/1158718886980358144
I0806 13:03:33.561] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/80901/pull-kubernetes-e2e-gce-100-performance/1158718886980358144/artifacts
W0806 13:03:34.698] CommandException: One or more URLs matched no objects.
E0806 13:03:34.828] Command failed
I0806 13:03:34.829] process 75433 exited with code 1 after 0.0m
W0806 13:03:34.829] Remote dir gs://kubernetes-jenkins/pr-logs/pull/80901/pull-kubernetes-e2e-gce-100-performance/1158718886980358144/artifacts not exist yet
I0806 13:03:34.829] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/80901/pull-kubernetes-e2e-gce-100-performance/1158718886980358144/artifacts
I0806 13:03:36.747] process 75575 exited with code 0 after 0.0m
I0806 13:03:36.748] Call:  git rev-parse HEAD
I0806 13:03:36.752] process 76099 exited with code 0 after 0.0m
... skipping 21 lines ...