PR: draveness: feat: use named array instead of array in normalizing score
Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2019-08-06 12:41
Elapsed: 22m13s
Revision:
Builder: gke-prow-ssd-pool-1a225945-jhsp
Refs: master:16d9a659, 80901:3b65d4f6
infra-commit: 30315db4a
job-version: v1.16.0-alpha.2.348+1019916cd387ef
pod: 6c6efb97-b847-11e9-8b4a-32c6663b536a
repo: k8s.io/kubernetes
repo-commit: 1019916cd387ef6d9a7c6c8a240e8211abef2d47
repos: {u'k8s.io/kubernetes': u'master:16d9a659da541fc38110c112c6ae20a1056baf37,80901:3b65d4f6bcf0a710623e63ae1b08ffcf37b41cda', u'k8s.io/perf-tests': u'master', u'k8s.io/release': u'master'}
revision: v1.16.0-alpha.2.348+1019916cd387ef

Test Failures


Up 29s

error during ./hack/e2e-internal/e2e-up.sh: exit status 1
	from junit_runner.xml


Error lines from build-log.txt

... skipping 265 lines ...
W0806 12:54:07.529] INFO: 5123 processes: 5123 processwrapper-sandbox.
W0806 12:54:07.534] INFO: Build completed successfully, 5223 total actions
W0806 12:54:07.538] INFO: Build completed successfully, 5223 total actions
W0806 12:54:07.546] 2019/08/06 12:54:07 process.go:155: Step 'make -C /go/src/k8s.io/kubernetes bazel-release' finished in 11m27.284233431s
W0806 12:54:07.547] 2019/08/06 12:54:07 util.go:255: Flushing memory.
I0806 12:54:07.648] make: Leaving directory '/go/src/k8s.io/kubernetes'
W0806 12:54:08.527] 2019/08/06 12:54:08 util.go:265: flushMem error (page cache): exit status 1
W0806 12:54:08.528] 2019/08/06 12:54:08 process.go:153: Running: /go/src/k8s.io/release/push-build.sh --nomock --verbose --noupdatelatest --bucket=kubernetes-release-pull --ci --gcs-suffix=/pull-kubernetes-e2e-gce-100-performance --allow-dup
I0806 12:54:08.634] push-build.sh: BEGIN main on 6c6efb97-b847-11e9-8b4a-32c6663b536a Tue Aug  6 12:54:08 UTC 2019
I0806 12:54:08.634] 
W0806 12:54:08.735] $TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
W0806 12:54:09.502] Loading: 
W0806 12:54:09.511] Loading: 0 packages loaded
... skipping 726 lines ...
W0806 12:57:46.559] Zone: us-east1-b
I0806 12:57:49.868] Bringing down cluster
W0806 12:57:49.969] INSTANCE_GROUPS=e2e-80901-95a39-minion-group
W0806 12:57:49.970] NODE_NAMES=e2e-80901-95a39-minion-group-14qx e2e-80901-95a39-minion-group-21ls e2e-80901-95a39-minion-group-2xzz e2e-80901-95a39-minion-group-3480 e2e-80901-95a39-minion-group-5bmr e2e-80901-95a39-minion-group-5ln0 e2e-80901-95a39-minion-group-5qcg e2e-80901-95a39-minion-group-5t3k e2e-80901-95a39-minion-group-67sv e2e-80901-95a39-minion-group-68fk e2e-80901-95a39-minion-group-6bg3 e2e-80901-95a39-minion-group-8q4z e2e-80901-95a39-minion-group-9315 e2e-80901-95a39-minion-group-c75b e2e-80901-95a39-minion-group-cnpk e2e-80901-95a39-minion-group-db8q e2e-80901-95a39-minion-group-dvtn e2e-80901-95a39-minion-group-frq5 e2e-80901-95a39-minion-group-fz43 e2e-80901-95a39-minion-group-hlck e2e-80901-95a39-minion-group-hs6s e2e-80901-95a39-minion-group-j065 e2e-80901-95a39-minion-group-k6kg e2e-80901-95a39-minion-group-kr23 e2e-80901-95a39-minion-group-l30t e2e-80901-95a39-minion-group-l87q e2e-80901-95a39-minion-group-lp0j e2e-80901-95a39-minion-group-shrt e2e-80901-95a39-minion-group-t2gm e2e-80901-95a39-minion-group-tnpq e2e-80901-95a39-minion-group-v1tb e2e-80901-95a39-minion-group-vl1n e2e-80901-95a39-minion-group-wsrm e2e-80901-95a39-minion-group-xg3d e2e-80901-95a39-minion-group-xj1r e2e-80901-95a39-minion-group-zb52
W0806 12:57:52.925] Deleting Managed Instance Group...
W0806 12:57:53.394] done.
W0806 12:57:53.399] ERROR: (gcloud.compute.instance-groups.managed.delete) Some requests did not succeed:
W0806 12:57:53.399]  - The resource 'projects/k8s-presubmit-scale/zones/us-east1-b/instanceGroupManagers/e2e-80901-95a39-minion-group' is not ready
W0806 12:57:53.399] 
W0806 12:57:53.477] Failed to delete instance group(s).
W0806 12:57:55.696] ERROR: (gcloud.compute.instance-templates.delete) Could not fetch resource:
W0806 12:57:55.696]  - The instance_template resource 'projects/k8s-presubmit-scale/global/instanceTemplates/e2e-80901-95a39-minion-template' is already being used by 'projects/k8s-presubmit-scale/zones/us-east1-b/instanceGroupManagers/e2e-80901-95a39-minion-group'
W0806 12:57:55.696] 
W0806 12:58:05.845] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/instanceTemplates/e2e-80901-95a39-windows-node-template].
W0806 12:58:14.981] Warning: Permanently added 'compute.6067192042879240539' (ED25519) to the list of known hosts.
I0806 12:58:15.358] 
I0806 12:58:15.448] Removing etcd replica, name: e2e-80901-95a39-master, port: 2379, result: 0
I0806 12:58:17.071] {"message":"Internal Server Error"}Removing etcd replica, name: e2e-80901-95a39-master, port: 4002, result: 0
W0806 12:58:24.097] Updated [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-80901-95a39-master].
W0806 13:01:07.484] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-80901-95a39-master].
W0806 13:01:30.578] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-80901-95a39-master-https].
W0806 13:01:36.681] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-80901-95a39-master-etcd].
W0806 13:01:42.769] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-80901-95a39-minion-all].
W0806 13:01:51.450] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0806 13:01:51.450]  - The resource 'projects/k8s-presubmit-scale/global/firewalls/e2e-80901-95a39-default-internal-node' is not ready
W0806 13:01:51.450] 
W0806 13:01:52.372] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0806 13:01:52.372]  - The resource 'projects/k8s-presubmit-scale/global/firewalls/e2e-80901-95a39-default-ssh' is not ready
W0806 13:01:52.372] 
W0806 13:01:52.967] Failed to delete firewall rules.
I0806 13:01:53.875] Deleting firewall rules remaining in network e2e-80901-95a39: 
I0806 13:01:54.711] Deleting custom subnet...
W0806 13:01:55.775] ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
W0806 13:01:55.775]  - The resource 'projects/k8s-presubmit-scale/regions/us-east1/subnetworks/e2e-80901-95a39-custom-subnet' is not ready
W0806 13:01:55.776] 
W0806 13:02:04.686] ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
W0806 13:02:04.686]  - The network resource 'projects/k8s-presubmit-scale/global/networks/e2e-80901-95a39' is already being used by 'projects/k8s-presubmit-scale/regions/us-east1/subnetworks/e2e-80901-95a39-custom-subnet'
W0806 13:02:04.686] 
I0806 13:02:04.787] Failed to delete network 'e2e-80901-95a39'. Listing firewall-rules:
W0806 13:02:05.806] 
W0806 13:02:05.807] To show all fields of the firewall, please show in JSON format: --format=json
W0806 13:02:05.807] To show all fields in table format, please see the examples in --help.
W0806 13:02:05.808] 
W0806 13:02:06.039] W0806 13:02:06.039060   72012 loader.go:223] Config not found: /workspace/.kube/config
W0806 13:02:06.217] W0806 13:02:06.216603   72067 loader.go:223] Config not found: /workspace/.kube/config
... skipping 25 lines ...
W0806 13:02:07.569] Zone: us-east1-b
I0806 13:02:16.329] +++ Staging tars to Google Storage: gs://kubernetes-staging-141a37ea6d/e2e-80901-95a39-devel
I0806 13:02:28.429] +++ kubernetes-server-linux-amd64.tar.gz uploaded (sha1 = def791ddb36fe6f5b3af28e828659a6b7eb43cf8)
I0806 13:02:30.722] +++ kubernetes-manifests.tar.gz uploaded earlier, cloud and local file md5 match (md5 = 92c0174d5a70529198ff660789bc1d8c)
I0806 13:02:32.631] Found existing network e2e-80901-95a39 in CUSTOM mode.
W0806 13:02:34.002] Creating firewall...
W0806 13:02:34.289] failed.
W0806 13:02:34.293] ERROR: (gcloud.compute.firewall-rules.create) Could not fetch resource:
W0806 13:02:34.294]  - The resource 'projects/k8s-presubmit-scale/global/networks/e2e-80901-95a39' is not ready
W0806 13:02:34.294] 
W0806 13:02:34.780] Creating firewall...
I0806 13:02:34.969] IP aliases are enabled. Creating subnetworks.
W0806 13:02:35.101] failed.
W0806 13:02:35.104] ERROR: (gcloud.compute.firewall-rules.create) Could not fetch resource:
W0806 13:02:35.104]  - The resource 'projects/k8s-presubmit-scale/global/networks/e2e-80901-95a39' is not ready
W0806 13:02:35.104] 
W0806 13:02:35.502] Creating firewall...
I0806 13:02:35.740] Creating subnet e2e-80901-95a39:e2e-80901-95a39-custom-subnet
W0806 13:02:35.840] failed.
W0806 13:02:35.841] ERROR: (gcloud.compute.firewall-rules.create) Could not fetch resource:
W0806 13:02:35.841]  - The resource 'projects/k8s-presubmit-scale/global/networks/e2e-80901-95a39' is not ready
W0806 13:02:35.841] 
W0806 13:02:36.482] ERROR: (gcloud.compute.networks.subnets.create) Could not fetch resource:
W0806 13:02:36.483]  - Internal error. Please try again or contact Google Support. (Code: '6014257078259152838')
W0806 13:02:36.483] 
W0806 13:02:36.557] 2019/08/06 13:02:36 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 29.750766406s
W0806 13:02:36.557] 2019/08/06 13:02:36 e2e.go:519: Dumping logs from nodes to GCS directly at path: gs://kubernetes-jenkins/pr-logs/pull/80901/pull-kubernetes-e2e-gce-100-performance/1158719640130555904/artifacts
W0806 13:02:36.558] 2019/08/06 13:02:36 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/80901/pull-kubernetes-e2e-gce-100-performance/1158719640130555904/artifacts
W0806 13:02:36.637] Trying to find master named 'e2e-80901-95a39-master'
W0806 13:02:36.637] Looking for address 'e2e-80901-95a39-master-ip'
I0806 13:02:36.738] Checking for custom logdump instances, if any
I0806 13:02:36.739] Sourcing kube-util.sh
I0806 13:02:36.739] Detecting project
I0806 13:02:36.739] Project: k8s-presubmit-scale
I0806 13:02:36.740] Network Project: k8s-presubmit-scale
I0806 13:02:36.740] Zone: us-east1-b
I0806 13:02:36.740] Dumping logs from master locally to '/workspace/_artifacts'
W0806 13:02:37.418] ERROR: (gcloud.compute.addresses.describe) Could not fetch resource:
W0806 13:02:37.418]  - The resource 'projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-80901-95a39-master-ip' was not found
W0806 13:02:37.418] 
W0806 13:02:37.486] Could not detect Kubernetes master node.  Make sure you've launched a cluster with 'kube-up.sh'
I0806 13:02:37.586] Master not detected. Is the cluster up?
I0806 13:02:37.587] Dumping logs from nodes to GCS directly at 'gs://kubernetes-jenkins/pr-logs/pull/80901/pull-kubernetes-e2e-gce-100-performance/1158719640130555904/artifacts' using logexporter
I0806 13:02:37.587] Detecting nodes in the cluster
... skipping 32 lines ...
I0806 13:03:18.367] Cleared config for k8s-presubmit-scale_e2e-80901-95a39 from /workspace/.kube/config
I0806 13:03:18.368] Done
W0806 13:03:18.384] W0806 13:03:18.363151   75169 loader.go:223] Config not found: /workspace/.kube/config
W0806 13:03:18.384] W0806 13:03:18.363424   75169 loader.go:223] Config not found: /workspace/.kube/config
W0806 13:03:18.385] 2019/08/06 13:03:18 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 35.785175406s
W0806 13:03:18.385] 2019/08/06 13:03:18 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0806 13:03:18.385] 2019/08/06 13:03:18 main.go:316: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
W0806 13:03:18.385] Traceback (most recent call last):
W0806 13:03:18.385]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0806 13:03:18.385]     main(parse_args())
W0806 13:03:18.385]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0806 13:03:18.386]     mode.start(runner_args)
W0806 13:03:18.386]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0806 13:03:18.386]     check_env(env, self.command, *args)
W0806 13:03:18.386]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0806 13:03:18.386]     subprocess.check_call(cmd, env=env)
W0806 13:03:18.387]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0806 13:03:18.387]     raise CalledProcessError(retcode, cmd)
W0806 13:03:18.388] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-e2e-gce-100-performance', '--up', '--down', '--provider=gce', '--cluster=e2e-80901-95a39', '--gcp-network=e2e-80901-95a39', '--extract=local', '--gcp-nodes=100', '--gcp-project=k8s-presubmit-scale', '--gcp-zone=us-east1-b', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--nodes=100', '--test-cmd-args=--provider=gce', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/density/100_nodes/override.yaml', '--test-cmd-args=--testoverrides=./testing/load/gce/throughput_override.yaml', '--test-cmd-args=--testoverrides=./testing/prometheus/scrape-etcd.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/probes.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=100m', '--logexporter-gcs-path=gs://kubernetes-jenkins/pr-logs/pull/80901/pull-kubernetes-e2e-gce-100-performance/1158719640130555904/artifacts')' returned non-zero exit status 1
E0806 13:03:18.388] Command failed
I0806 13:03:18.388] process 523 exited with code 1 after 20.7m
E0806 13:03:18.389] FAIL: pull-kubernetes-e2e-gce-100-performance
I0806 13:03:18.389] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0806 13:03:18.900] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0806 13:03:18.950] process 75180 exited with code 0 after 0.0m
I0806 13:03:18.950] Call:  gcloud config get-value account
I0806 13:03:19.253] process 75192 exited with code 0 after 0.0m
I0806 13:03:19.254] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0806 13:03:19.254] Upload result and artifacts...
I0806 13:03:19.254] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/80901/pull-kubernetes-e2e-gce-100-performance/1158719640130555904
I0806 13:03:19.254] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/80901/pull-kubernetes-e2e-gce-100-performance/1158719640130555904/artifacts
W0806 13:03:20.296] CommandException: One or more URLs matched no objects.
E0806 13:03:20.418] Command failed
I0806 13:03:20.418] process 75204 exited with code 1 after 0.0m
W0806 13:03:20.419] Remote dir gs://kubernetes-jenkins/pr-logs/pull/80901/pull-kubernetes-e2e-gce-100-performance/1158719640130555904/artifacts not exist yet
I0806 13:03:20.419] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/80901/pull-kubernetes-e2e-gce-100-performance/1158719640130555904/artifacts
I0806 13:03:22.135] process 75346 exited with code 0 after 0.0m
I0806 13:03:22.136] Call:  git rev-parse HEAD
I0806 13:03:22.141] process 75870 exited with code 0 after 0.0m
... skipping 21 lines ...