Result: FAILURE
Tests: 1 failed / 17 succeeded
Started: 2020-01-03 06:43
Elapsed: 45m23s
Revision: v1.18.0-alpha.1.303+47d5c3ef8df2b1
Builder: gke-prow-ssd-pool-1a225945-g9xl
pod: 37221fb5-2df4-11ea-a07b-c6eb1bf16817
resultstore: https://source.cloud.google.com/results/invocations/35344b9a-5354-46dd-ac5b-d614a1428593/targets/test
infra-commit: d58556ac6
job-version: v1.18.0-alpha.1.303+47d5c3ef8df2b1
repo: k8s.io/kubernetes
repo-commit: 47d5c3ef8df2b1b26da739aec0ada15d41f20cf3
repos: k8s.io/kubernetes: master, k8s.io/perf-tests: master
revision: v1.18.0-alpha.1.303+47d5c3ef8df2b1

Test Failures


ClusterLoaderV2 (15m40s)

error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1212987706812928000 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1
				from junit_runner.xml
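
The failing step is the ClusterLoaderV2 wrapper from k8s.io/perf-tests, invoked via run-e2e.sh with the flags listed above. A minimal sketch of rerunning roughly the same invocation by hand against an already-running kubemark cluster, assuming a local checkout of k8s.io/perf-tests and a kubeconfig pointed at that cluster (the report directory is a placeholder, and the CI-only snapshot flags are dropped):

# Sketch only: manual rerun of the failing ClusterLoaderV2 step (flags copied from the log above).
cd "$GOPATH/src/k8s.io/perf-tests"
./run-e2e.sh cluster-loader2 \
  --nodes=5000 \
  --provider=kubemark \
  --report-dir=/tmp/clusterloader2-artifacts \
  --testconfig=testing/density/config.yaml \
  --testconfig=testing/load/config.yaml \
  --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml \
  --testoverrides=./testing/experiments/enable_restart_count_check.yaml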




Error lines from build-log.txt

... skipping 427 lines ...
W0103 06:50:10.432] Trying to find master named 'kubemark-5000-master'
W0103 06:50:10.432] Looking for address 'kubemark-5000-master-ip'
W0103 06:50:11.481] Looking for address 'kubemark-5000-master-internal-ip'
I0103 06:50:12.470] Waiting up to 300 seconds for cluster initialization.
I0103 06:50:12.470] 
I0103 06:50:12.470]   This will continually check to see if the API for kubernetes is reachable.
I0103 06:50:12.470]   This may time out if there was some uncaught error during start up.
I0103 06:50:12.470] 
W0103 06:50:12.571] Using master: kubemark-5000-master (external IP: 35.237.157.213; internal IP: 10.40.0.2)
I0103 06:50:12.674] Kubernetes cluster created.
I0103 06:50:12.818] Cluster "kubemark-scalability-testing_kubemark-5000" set.
I0103 06:50:12.999] User "kubemark-scalability-testing_kubemark-5000" set.
I0103 06:50:13.194] Context "kubemark-scalability-testing_kubemark-5000" created.
... skipping 100 lines ...
I0103 06:50:54.656] kubemark-5000-minion-group-x7q9   Ready                      <none>   13s   v1.18.0-alpha.1.303+47d5c3ef8df2b1
I0103 06:50:54.656] kubemark-5000-minion-group-xbgn   Ready                      <none>   6s    v1.18.0-alpha.1.303+47d5c3ef8df2b1
I0103 06:50:54.657] kubemark-5000-minion-group-xcxg   Ready                      <none>   10s   v1.18.0-alpha.1.303+47d5c3ef8df2b1
I0103 06:50:54.657] kubemark-5000-minion-group-zt79   Ready                      <none>   10s   v1.18.0-alpha.1.303+47d5c3ef8df2b1
I0103 06:50:54.657] kubemark-5000-minion-heapster     Ready                      <none>   25s   v1.18.0-alpha.1.303+47d5c3ef8df2b1
I0103 06:50:55.116] Validate output:
I0103 06:50:55.548] NAME                 STATUS    MESSAGE             ERROR
I0103 06:50:55.549] scheduler            Healthy   ok                  
I0103 06:50:55.550] controller-manager   Healthy   ok                  
I0103 06:50:55.550] etcd-1               Healthy   {"health":"true"}   
I0103 06:50:55.550] etcd-0               Healthy   {"health":"true"}   
I0103 06:50:55.559] Cluster validation succeeded
W0103 06:50:55.659] Done, listing cluster services:
... skipping 219 lines ...
W0103 06:53:37.133] Trying to find master named 'kubemark-5000-kubemark-master'
W0103 06:53:37.134] Looking for address 'kubemark-5000-kubemark-master-ip'
W0103 06:53:38.197] Looking for address 'kubemark-5000-kubemark-master-internal-ip'
I0103 06:53:39.336] Waiting up to 300 seconds for cluster initialization.
I0103 06:53:39.337] 
I0103 06:53:39.337]   This will continually check to see if the API for kubernetes is reachable.
I0103 06:53:39.337]   This may time out if there was some uncaught error during start up.
I0103 06:53:39.337] 
I0103 06:54:07.127] ...........Kubernetes cluster created.
W0103 06:54:07.228] Using master: kubemark-5000-kubemark-master (external IP: 35.231.150.150; internal IP: 10.40.3.216)
I0103 06:54:07.333] Cluster "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I0103 06:54:07.523] User "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I0103 06:54:07.733] Context "kubemark-scalability-testing_kubemark-5000-kubemark" created.
... skipping 19 lines ...
I0103 06:54:33.644] Found 0 Nodes, allowing additional 2 iterations for other Nodes to join.
I0103 06:54:33.645] Waiting for 1 ready nodes. 0 ready nodes, 1 registered. Retrying.
I0103 06:54:49.001] Found 1 node(s).
I0103 06:54:49.347] NAME                            STATUS                     ROLES    AGE   VERSION
I0103 06:54:49.348] kubemark-5000-kubemark-master   Ready,SchedulingDisabled   <none>   21s   v1.18.0-alpha.1.303+47d5c3ef8df2b1
I0103 06:54:49.748] Validate output:
I0103 06:54:50.088] NAME                 STATUS    MESSAGE             ERROR
I0103 06:54:50.089] controller-manager   Healthy   ok                  
I0103 06:54:50.089] scheduler            Healthy   ok                  
I0103 06:54:50.089] etcd-1               Healthy   {"health":"true"}   
I0103 06:54:50.089] etcd-0               Healthy   {"health":"true"}   
I0103 06:54:50.105] Cluster validation succeeded
W0103 06:54:50.206] Done, listing cluster services:
... skipping 5148 lines ...
W0103 07:01:01.633] I0103 07:01:01.633309   28692 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-serviceMonitor.yaml
W0103 07:01:01.672] I0103 07:01:01.672605   28692 prometheus.go:201] Exposing kube-apiserver metrics in kubemark cluster
W0103 07:01:01.824] I0103 07:01:01.823830   28692 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-endpoints.yaml
W0103 07:01:01.862] I0103 07:01:01.862418   28692 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-service.yaml
W0103 07:01:01.903] I0103 07:01:01.903021   28692 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-serviceMonitor.yaml
W0103 07:01:01.942] I0103 07:01:01.942164   28692 prometheus.go:277] Waiting for Prometheus stack to become healthy...
W0103 07:01:31.981] W0103 07:01:31.980936   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:02:01.982] W0103 07:02:01.981935   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:02:31.981] W0103 07:02:31.981627   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:03:01.982] W0103 07:03:01.981810   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:03:31.982] W0103 07:03:31.981923   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:04:01.982] W0103 07:04:01.981644   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:04:31.981] W0103 07:04:31.981043   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:05:01.982] W0103 07:05:01.982445   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:05:31.981] W0103 07:05:31.980896   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:06:01.981] W0103 07:06:01.981581   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:06:31.981] W0103 07:06:31.981570   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:07:01.981] W0103 07:07:01.981053   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:07:31.981] W0103 07:07:31.981541   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:08:01.981] W0103 07:08:01.981132   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:08:31.982] W0103 07:08:31.982542   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:09:01.981] W0103 07:09:01.981396   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:09:31.983] W0103 07:09:31.983337   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:10:01.981] W0103 07:10:01.981081   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:10:31.981] W0103 07:10:31.981297   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:11:01.981] W0103 07:11:01.981138   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:11:31.981] W0103 07:11:31.980955   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:12:01.985] W0103 07:12:01.985432   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:12:31.981] W0103 07:12:31.981621   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:13:01.981] W0103 07:13:01.980848   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:13:31.982] W0103 07:13:31.982481   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:14:01.981] W0103 07:14:01.980900   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:14:31.981] W0103 07:14:31.980943   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:15:01.981] W0103 07:15:01.981065   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:15:31.981] W0103 07:15:31.981076   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:16:01.981] W0103 07:16:01.981027   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:16:02.019] W0103 07:16:02.018703   28692 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 07:16:02.019] I0103 07:16:02.018918   28692 prometheus.go:325] Dumping monitoring/prometheus-k8s events...
W0103 07:16:02.056] I0103 07:16:02.056050   28692 prometheus.go:336] {
W0103 07:16:02.056]   "metadata": {
W0103 07:16:02.056]     "selfLink": "/api/v1/namespaces/monitoring/events",
W0103 07:16:02.056]     "resourceVersion": "74621"
W0103 07:16:02.056]   },
... skipping 57 lines ...
W0103 07:16:02.066]       "eventTime": null,
W0103 07:16:02.066]       "reportingComponent": "",
W0103 07:16:02.066]       "reportingInstance": ""
W0103 07:16:02.066]     }
W0103 07:16:02.066]   ]
W0103 07:16:02.066] }
W0103 07:16:02.067] F0103 07:16:02.056094   28692 clusterloader.go:248] Error while setting up prometheus stack: timed out waiting for the condition
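
The repeated "error while calling prometheus api" lines above come from clusterloader2 polling Prometheus through the apiserver's service proxy (service http:prometheus-k8s:9090 in the monitoring namespace) until it reports healthy; here it never did, so setup aborted after roughly 15 minutes. A hedged sketch of manual checks one might run while triaging this, assuming kubectl is pointed at the kubemark cluster (the statefulset name and the app=prometheus label follow the usual kube-prometheus conventions and are assumptions, not taken from this log):

# Sketch only: inspect the monitoring stack that clusterloader2 was waiting on.
kubectl -n monitoring get statefulset prometheus-k8s      # assumed name; is the Prometheus StatefulSet ready?
kubectl -n monitoring get endpoints prometheus-k8s        # does the service have any ready endpoints?
kubectl -n monitoring get pods -l app=prometheus -o wide  # assumed label; are the pods scheduled and Running?
# Same service-proxy path the failing health check uses (per the error message above):
kubectl get --raw /api/v1/namespaces/monitoring/services/http:prometheus-k8s:9090/proxy/-/healthy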
W0103 07:16:02.095] 2020/01/03 07:16:02 process.go:155: Step '/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1212987706812928000 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml' finished in 15m40.15832847s
W0103 07:16:02.096] 2020/01/03 07:16:02 e2e.go:531: Dumping logs from nodes to GCS directly at path: gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1212987706812928000/artifacts
W0103 07:16:02.096] 2020/01/03 07:16:02 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1212987706812928000/artifacts
W0103 07:16:02.098] 2020/01/03 07:16:02 process.go:153: Running: ./test/kubemark/master-log-dump.sh /workspace/_artifacts
W0103 07:16:02.196] Trying to find master named 'kubemark-5000-master'
W0103 07:16:02.196] Looking for address 'kubemark-5000-master-internal-ip'
... skipping 22 lines ...
W0103 07:16:45.893] 
W0103 07:16:45.894] Specify --start=47711 in the next get-serial-port-output invocation to get only the new output starting from here.
W0103 07:16:52.202] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0103 07:16:52.270] scp: /var/log/fluentd.log*: No such file or directory
W0103 07:16:52.271] scp: /var/log/kubelet.cov*: No such file or directory
W0103 07:16:52.271] scp: /var/log/startupscript.log*: No such file or directory
W0103 07:16:52.277] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0103 07:16:52.425] Dumping logs from nodes to GCS directly at 'gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1212987706812928000/artifacts' using logexporter
I0103 07:16:52.425] Detecting nodes in the cluster
I0103 07:16:57.433] namespace/logexporter created
I0103 07:16:57.472] secret/google-service-account created
I0103 07:16:57.510] daemonset.apps/logexporter created
W0103 07:16:58.574] CommandException: One or more URLs matched no objects.
W0103 07:17:14.922] CommandException: One or more URLs matched no objects.
W0103 07:17:21.492] scp: /var/log/glbc.log*: No such file or directory
W0103 07:17:21.492] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0103 07:17:21.562] scp: /var/log/fluentd.log*: No such file or directory
W0103 07:17:21.563] scp: /var/log/kubelet.cov*: No such file or directory
W0103 07:17:21.563] scp: /var/log/startupscript.log*: No such file or directory
W0103 07:17:21.570] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0103 07:17:21.688] Skipping dumping of node logs
W0103 07:17:21.789] 2020/01/03 07:17:21 process.go:155: Step './test/kubemark/master-log-dump.sh /workspace/_artifacts' finished in 1m19.591482255s
W0103 07:17:21.789] 2020/01/03 07:17:21 process.go:153: Running: ./test/kubemark/stop-kubemark.sh
I0103 07:17:31.477] Successfully listed marker files for successful nodes
I0103 07:17:47.809] Successfully listed marker files for successful nodes
I0103 07:17:48.409] Fetching logs from logexporter-2gsq8 running on kubemark-5000-minion-group-tdz6
... skipping 242 lines ...
W0103 07:26:54.930] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/zones/us-east1-b/instances/kubemark-5000-kubemark-master].
W0103 07:26:58.040] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-default-internal-node].
W0103 07:26:58.244] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-minion-all].
W0103 07:26:59.173] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-etcd].
W0103 07:27:00.327] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-https].
I0103 07:27:01.668] Deleting custom subnet...
W0103 07:27:03.190] ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
W0103 07:27:03.191]  - The subnetwork resource 'projects/kubemark-scalability-testing/regions/us-east1/subnetworks/kubemark-5000-custom-subnet' is already being used by 'projects/kubemark-scalability-testing/regions/us-east1/addresses/kubemark-5000-kubemark-master-internal-ip'
W0103 07:27:03.191] 
W0103 07:27:07.138] ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
W0103 07:27:07.139]  - The network resource 'projects/kubemark-scalability-testing/global/networks/kubemark-5000' is already being used by 'projects/kubemark-scalability-testing/regions/us-east1/subnetworks/kubemark-5000-custom-subnet'
W0103 07:27:07.139] 
I0103 07:27:07.239] Failed to delete network 'kubemark-5000'. Listing firewall-rules:
W0103 07:27:08.081] 
W0103 07:27:08.082] To show all fields of the firewall, please show in JSON format: --format=json
W0103 07:27:08.082] To show all fields in table format, please see the examples in --help.
W0103 07:27:08.082] 
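
The two gcloud deletion errors above are ordering issues during teardown: the custom subnet cannot be deleted while the kubemark master's internal address still references it, and the network cannot be deleted while the subnet exists. If these resources are ever left behind, a hedged sketch of the manual cleanup in dependency order, using the resource names from the errors:

# Sketch only: release the address that pins the subnet, then retry the subnet and network deletes.
gcloud compute addresses delete kubemark-5000-kubemark-master-internal-ip \
  --project=kubemark-scalability-testing --region=us-east1 --quiet
gcloud compute networks subnets delete kubemark-5000-custom-subnet \
  --project=kubemark-scalability-testing --region=us-east1 --quiet
gcloud compute networks delete kubemark-5000 \
  --project=kubemark-scalability-testing --quiet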
W0103 07:27:08.392] W0103 07:27:08.391948   36413 loader.go:223] Config not found: /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
W0103 07:27:08.597] W0103 07:27:08.594919   36462 loader.go:223] Config not found: /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
... skipping 18 lines ...
I0103 07:27:23.601] Property "users.kubemark-scalability-testing_kubemark-5000-kubemark-basic-auth" unset.
I0103 07:27:23.804] Property "contexts.kubemark-scalability-testing_kubemark-5000-kubemark" unset.
I0103 07:27:23.815] Cleared config for kubemark-scalability-testing_kubemark-5000-kubemark from /workspace/.kube/config
I0103 07:27:23.815] Done
W0103 07:27:23.851] 2020/01/03 07:27:23 process.go:155: Step './test/kubemark/stop-kubemark.sh' finished in 10m2.127767685s
W0103 07:27:23.851] 2020/01/03 07:27:23 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0103 07:27:23.852] 2020/01/03 07:27:23 main.go:319: Something went wrong: encountered 1 errors: [error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1212987706812928000 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1]
W0103 07:27:23.852] Traceback (most recent call last):
W0103 07:27:23.852]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0103 07:27:23.852]     main(parse_args())
W0103 07:27:23.853]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0103 07:27:23.853]     mode.start(runner_args)
W0103 07:27:23.853]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0103 07:27:23.853]     check_env(env, self.command, *args)
W0103 07:27:23.853]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0103 07:27:23.853]     subprocess.check_call(cmd, env=env)
W0103 07:27:23.853]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0103 07:27:23.853]     raise CalledProcessError(retcode, cmd)
W0103 07:27:23.855] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--provider=gce', '--cluster=kubemark-5000', '--gcp-network=kubemark-5000', '--extract=ci/latest', '--gcp-node-image=gci', '--gcp-node-size=n1-standard-8', '--gcp-nodes=84', '--gcp-project=kubemark-scalability-testing', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=5000', '--test_args=--ginkgo.focus=xxxx', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--experimental-gcp-snapshot-prometheus-disk=true', '--test-cmd-args=--experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1212987706812928000', '--test-cmd-args=--nodes=5000', '--test-cmd-args=--provider=kubemark', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_restart_count_check.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=1080m', '--logexporter-gcs-path=gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1212987706812928000/artifacts')' returned non-zero exit status 1
E0103 07:27:23.855] Command failed
I0103 07:27:23.856] process 511 exited with code 1 after 42.1m
E0103 07:27:23.856] FAIL: ci-kubernetes-kubemark-gce-scale
I0103 07:27:23.856] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0103 07:27:24.450] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0103 07:27:24.525] process 36997 exited with code 0 after 0.0m
I0103 07:27:24.525] Call:  gcloud config get-value account
I0103 07:27:24.903] process 37010 exited with code 0 after 0.0m
I0103 07:27:24.904] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
... skipping 21 lines ...