Result: FAILURE
Tests: 1 failed / 17 succeeded
Started: 2020-01-09 15:57
Elapsed: 56m45s
Revision:
Builder: gke-prow-default-pool-cf4891d4-r8nq
links: {u'resultstore': {u'url': u'https://source.cloud.google.com/results/invocations/21f0664d-d4be-4c77-b981-1fd28d71c727/targets/test'}}
pod: 90d786c9-32f8-11ea-9709-02f27a93e62e
resultstore: https://source.cloud.google.com/results/invocations/21f0664d-d4be-4c77-b981-1fd28d71c727/targets/test
infra-commit: 33b48a710
job-version: v1.18.0-alpha.1.517+32d8799ef19ba2
pod: 90d786c9-32f8-11ea-9709-02f27a93e62e
repo: k8s.io/kubernetes
repo-commit: 32d8799ef19ba257950c4b7f50e5c50d678837f1
repos: {u'k8s.io/kubernetes': u'master', u'k8s.io/perf-tests': u'master'}
revision: v1.18.0-alpha.1.517+32d8799ef19ba2

Test Failures


ClusterLoaderV2 15m31s

error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1215301413748346881 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1
				from junit_runner.xml

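Re-running the failing step by hand is one way to separate a ClusterLoaderV2 problem from a cluster bring-up problem. Below is a minimal sketch, assuming k8s.io/perf-tests is checked out under $GOPATH/src and kubectl/KUBECONFIG already points at the kubemark cluster; the flags are copied from the failing step above, with the GCP snapshot flags dropped and an illustrative local report directory substituted. The remaining --testoverrides flags from the failing step (configmaps, daemonsets, jobs, secrets, statefulsets) can be appended the same way if a full reproduction is needed.

  # Sketch only: local paths and the report dir are illustrative, flags are from the failing step.
  cd "${GOPATH}/src/k8s.io/perf-tests"
  ./run-e2e.sh cluster-loader2 \
    --nodes=5000 \
    --provider=kubemark \
    --report-dir=/tmp/_artifacts \
    --testconfig=testing/density/config.yaml \
    --testconfig=testing/load/config.yaml \
    --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml \
    --testoverrides=./testing/experiments/enable_restart_count_check.yaml \
    --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml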



Error lines from build-log.txt

... skipping 427 lines ...
W0109 16:02:40.796] Trying to find master named 'kubemark-5000-master'
W0109 16:02:40.796] Looking for address 'kubemark-5000-master-ip'
W0109 16:02:41.690] Looking for address 'kubemark-5000-master-internal-ip'
I0109 16:02:42.595] Waiting up to 300 seconds for cluster initialization.
I0109 16:02:42.596] 
I0109 16:02:42.596]   This will continually check to see if the API for kubernetes is reachable.
I0109 16:02:42.596]   This may time out if there was some uncaught error during start up.
I0109 16:02:42.596] 
W0109 16:02:42.696] Using master: kubemark-5000-master (external IP: 35.243.232.192; internal IP: 10.40.0.2)
I0109 16:02:42.797] Kubernetes cluster created.
I0109 16:02:42.905] Cluster "kubemark-scalability-testing_kubemark-5000" set.
I0109 16:02:43.072] User "kubemark-scalability-testing_kubemark-5000" set.
I0109 16:02:43.248] Context "kubemark-scalability-testing_kubemark-5000" created.
... skipping 102 lines ...
I0109 16:03:39.327] kubemark-5000-minion-group-z8jr   Ready                      <none>   22s   v1.18.0-alpha.1.517+32d8799ef19ba2
I0109 16:03:39.327] kubemark-5000-minion-group-zc6j   Ready                      <none>   24s   v1.18.0-alpha.1.517+32d8799ef19ba2
I0109 16:03:39.327] kubemark-5000-minion-group-zk60   Ready                      <none>   21s   v1.18.0-alpha.1.517+32d8799ef19ba2
I0109 16:03:39.328] kubemark-5000-minion-group-zprs   Ready                      <none>   18s   v1.18.0-alpha.1.517+32d8799ef19ba2
I0109 16:03:39.328] kubemark-5000-minion-heapster     Ready                      <none>   38s   v1.18.0-alpha.1.517+32d8799ef19ba2
I0109 16:03:39.628] Validate output:
I0109 16:03:39.904] NAME                 STATUS    MESSAGE             ERROR
I0109 16:03:39.904] etcd-1               Healthy   {"health":"true"}   
I0109 16:03:39.905] controller-manager   Healthy   ok                  
I0109 16:03:39.905] scheduler            Healthy   ok                  
I0109 16:03:39.905] etcd-0               Healthy   {"health":"true"}   
I0109 16:03:39.912] Cluster validation succeeded
W0109 16:03:40.012] Done, listing cluster services:
... skipping 220 lines ...
W0109 16:06:05.212] Looking for address 'kubemark-5000-kubemark-master-ip'
W0109 16:06:06.308] Looking for address 'kubemark-5000-kubemark-master-internal-ip'
W0109 16:06:07.301] Using master: kubemark-5000-kubemark-master (external IP: 34.73.62.71; internal IP: 10.40.3.216)
I0109 16:06:07.401] Waiting up to 300 seconds for cluster initialization.
I0109 16:06:07.402] 
I0109 16:06:07.402]   This will continually check to see if the API for kubernetes is reachable.
I0109 16:06:07.402]   This may time out if there was some uncaught error during start up.
I0109 16:06:07.402] 
I0109 16:06:45.102] ............Kubernetes cluster created.
I0109 16:06:45.270] Cluster "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I0109 16:06:45.431] User "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I0109 16:06:45.618] Context "kubemark-scalability-testing_kubemark-5000-kubemark" created.
I0109 16:06:45.790] Switched to context "kubemark-scalability-testing_kubemark-5000-kubemark".
... skipping 22 lines ...
I0109 16:07:29.482] NAME                            STATUS                        ROLES    AGE   VERSION
I0109 16:07:29.482] kubemark-5000-kubemark-master   NotReady,SchedulingDisabled   <none>   25s   v1.18.0-alpha.1.517+32d8799ef19ba2
I0109 16:07:29.488] Found 1 node(s).
I0109 16:07:29.761] NAME                            STATUS                        ROLES    AGE   VERSION
I0109 16:07:29.761] kubemark-5000-kubemark-master   NotReady,SchedulingDisabled   <none>   25s   v1.18.0-alpha.1.517+32d8799ef19ba2
I0109 16:07:30.081] Validate output:
I0109 16:07:30.387] NAME                 STATUS    MESSAGE             ERROR
I0109 16:07:30.388] scheduler            Healthy   ok                  
I0109 16:07:30.388] controller-manager   Healthy   ok                  
I0109 16:07:30.388] etcd-1               Healthy   {"health":"true"}   
I0109 16:07:30.388] etcd-0               Healthy   {"health":"true"}   
I0109 16:07:30.394] Cluster validation encountered some problems, but cluster should be in working order
W0109 16:07:30.495] ...ignoring non-fatal errors in validate-cluster
W0109 16:07:30.495] Done, listing cluster services:
W0109 16:07:30.495] 
I0109 16:07:30.673] Kubernetes master is running at https://34.73.62.71
I0109 16:07:30.673] 
I0109 16:07:30.673] To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
I0109 16:07:30.680] 
... skipping 5143 lines ...
W0109 16:13:28.208] I0109 16:13:28.208035   29309 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-serviceMonitor.yaml
W0109 16:13:28.247] I0109 16:13:28.247405   29309 prometheus.go:201] Exposing kube-apiserver metrics in kubemark cluster
W0109 16:13:28.395] I0109 16:13:28.395420   29309 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-endpoints.yaml
W0109 16:13:28.433] I0109 16:13:28.433565   29309 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-service.yaml
W0109 16:13:28.471] I0109 16:13:28.470666   29309 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-serviceMonitor.yaml
W0109 16:13:28.509] I0109 16:13:28.508928   29309 prometheus.go:277] Waiting for Prometheus stack to become healthy...
W0109 16:13:58.547] W0109 16:13:58.546647   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:14:28.548] W0109 16:14:28.547743   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:14:58.547] W0109 16:14:58.547546   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:15:28.547] W0109 16:15:28.547403   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:15:58.548] W0109 16:15:58.547824   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:16:28.548] W0109 16:16:28.548420   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:16:58.548] W0109 16:16:58.548370   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:17:28.548] W0109 16:17:28.548470   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:17:58.547] W0109 16:17:58.547605   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:18:28.548] W0109 16:18:28.548031   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:18:58.551] W0109 16:18:58.550731   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:19:28.550] W0109 16:19:28.548423   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:19:58.550] W0109 16:19:58.550130   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:20:28.547] W0109 16:20:28.547719   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:20:58.547] W0109 16:20:58.547562   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:21:28.548] W0109 16:21:28.548282   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:21:58.549] W0109 16:21:58.548819   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:22:28.548] W0109 16:22:28.547799   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:22:58.548] W0109 16:22:58.548081   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:23:28.548] W0109 16:23:28.547771   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:23:58.547] W0109 16:23:58.547322   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:24:28.548] W0109 16:24:28.547904   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:24:58.547] W0109 16:24:58.547594   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:25:28.548] W0109 16:25:28.547960   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:25:58.547] W0109 16:25:58.547249   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:26:28.547] W0109 16:26:28.547643   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:26:58.548] W0109 16:26:58.548170   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:27:28.548] W0109 16:27:28.548523   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:27:58.550] W0109 16:27:58.549812   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:28:28.547] W0109 16:28:28.547452   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:28:28.584] W0109 16:28:28.584182   29309 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0109 16:28:28.584] I0109 16:28:28.584214   29309 prometheus.go:325] Dumping monitoring/prometheus-k8s events...
W0109 16:28:28.621] I0109 16:28:28.621107   29309 prometheus.go:336] {
W0109 16:28:28.621]   "metadata": {
W0109 16:28:28.621]     "selfLink": "/api/v1/namespaces/monitoring/events",
W0109 16:28:28.621]     "resourceVersion": "74608"
W0109 16:28:28.621]   },
... skipping 57 lines ...
W0109 16:28:28.630]       "eventTime": null,
W0109 16:28:28.631]       "reportingComponent": "",
W0109 16:28:28.631]       "reportingInstance": ""
W0109 16:28:28.631]     }
W0109 16:28:28.631]   ]
W0109 16:28:28.631] }
W0109 16:28:28.632] F0109 16:28:28.621159   29309 clusterloader.go:248] Error while setting up prometheus stack: timed out waiting for the condition
W0109 16:28:28.656] 2020/01/09 16:28:28 process.go:155: Step '/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1215301413748346881 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml' finished in 15m31.734217098s
W0109 16:28:28.657] 2020/01/09 16:28:28 e2e.go:531: Dumping logs from nodes to GCS directly at path: gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1215301413748346881/artifacts
W0109 16:28:28.657] 2020/01/09 16:28:28 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1215301413748346881/artifacts
W0109 16:28:28.657] 2020/01/09 16:28:28 process.go:153: Running: ./test/kubemark/master-log-dump.sh /workspace/_artifacts
W0109 16:28:28.739] Trying to find master named 'kubemark-5000-master'
W0109 16:28:28.739] Looking for address 'kubemark-5000-master-internal-ip'
... skipping 22 lines ...
W0109 16:29:09.499] 
W0109 16:29:09.499] Specify --start=47768 in the next get-serial-port-output invocation to get only the new output starting from here.
W0109 16:29:15.883] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0109 16:29:15.951] scp: /var/log/fluentd.log*: No such file or directory
W0109 16:29:15.951] scp: /var/log/kubelet.cov*: No such file or directory
W0109 16:29:15.951] scp: /var/log/startupscript.log*: No such file or directory
W0109 16:29:15.956] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0109 16:29:16.056] Dumping logs from nodes to GCS directly at 'gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1215301413748346881/artifacts' using logexporter
I0109 16:29:16.056] Detecting nodes in the cluster
I0109 16:29:20.828] namespace/logexporter created
I0109 16:29:20.866] secret/google-service-account created
I0109 16:29:20.908] daemonset.apps/logexporter created
W0109 16:29:21.892] CommandException: One or more URLs matched no objects.
W0109 16:29:38.088] CommandException: One or more URLs matched no objects.
W0109 16:29:43.910] scp: /var/log/glbc.log*: No such file or directory
W0109 16:29:43.910] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0109 16:29:43.979] scp: /var/log/fluentd.log*: No such file or directory
W0109 16:29:43.979] scp: /var/log/kubelet.cov*: No such file or directory
W0109 16:29:43.979] scp: /var/log/startupscript.log*: No such file or directory
W0109 16:29:43.983] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0109 16:29:44.079] 2020/01/09 16:29:44 process.go:155: Step './test/kubemark/master-log-dump.sh /workspace/_artifacts' finished in 1m15.423411131s
W0109 16:29:44.080] 2020/01/09 16:29:44 process.go:153: Running: ./test/kubemark/stop-kubemark.sh
I0109 16:29:44.180] Skipping dumping of node logs
I0109 16:29:54.476] Successfully listed marker files for successful nodes
I0109 16:30:10.701] Successfully listed marker files for successful nodes
I0109 16:30:11.224] Fetching logs from logexporter-22qvp running on kubemark-5000-minion-group-n3c9
... skipping 230 lines ...
I0109 16:35:29.064] Property "users.kubemark-scalability-testing_kubemark-5000-kubemark-basic-auth" unset.
I0109 16:35:29.233] Property "contexts.kubemark-scalability-testing_kubemark-5000-kubemark" unset.
I0109 16:35:29.239] Cleared config for kubemark-scalability-testing_kubemark-5000-kubemark from /workspace/.kube/config
I0109 16:35:29.240] Done
W0109 16:35:29.340] 2020/01/09 16:35:29 process.go:155: Step './test/kubemark/stop-kubemark.sh' finished in 5m45.165561738s
W0109 16:37:03.257] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/zones/us-east1-b/instances/kubemark-5000-minion-heapster].
W0109 16:43:01.972] Failed to execute 'curl -s --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt --cert /etc/srv/kubernetes/pki/etcd-apiserver-client.crt --key /etc/srv/kubernetes/pki/etcd-apiserver-client.key https://127.0.0.1:2379/v2/members/$(curl -s --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt --cert /etc/srv/kubernetes/pki/etcd-apiserver-client.crt --key /etc/srv/kubernetes/pki/etcd-apiserver-client.key https://127.0.0.1:2379/v2/members -XGET | sed 's/{\"id/\n/g' | grep kubemark-5000-master\" | cut -f 3 -d \") -XDELETE -L 2>/dev/null' on kubemark-5000-master despite 5 attempts
W0109 16:43:01.973] Last attempt failed with: ssh: connect to host 35.243.232.192 port 22: Connection timed out

W0109 16:43:01.973] ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
I0109 16:43:02.073] Removing etcd replica, name: kubemark-5000-master, port: 2379, result: 1
W0109 16:48:58.069] Failed to execute 'curl -s  http://127.0.0.1:4002/v2/members/$(curl -s  http://127.0.0.1:4002/v2/members -XGET | sed 's/{\"id/\n/g' | grep kubemark-5000-master\" | cut -f 3 -d \") -XDELETE -L 2>/dev/null' on kubemark-5000-master despite 5 attempts
W0109 16:48:58.070] Last attempt failed with: ssh: connect to host 35.243.232.192 port 22: Connection timed out

W0109 16:48:58.070] ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
I0109 16:48:58.170] Removing etcd replica, name: kubemark-5000-master, port: 4002, result: 1
W0109 16:49:03.926] Updated [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/zones/us-east1-b/instances/kubemark-5000-master].
W0109 16:51:04.124] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/zones/us-east1-b/instances/kubemark-5000-master].
W0109 16:51:15.899] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-master-https].
W0109 16:51:17.113] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-master-etcd].
W0109 16:51:17.791] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-minion-all].
... skipping 21 lines ...
I0109 16:52:32.704] Cleared config for kubemark-scalability-testing_kubemark-5000 from /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
I0109 16:52:32.704] Done
W0109 16:52:32.735] W0109 16:52:32.698185   37671 loader.go:223] Config not found: /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
W0109 16:52:32.735] W0109 16:52:32.698349   37671 loader.go:223] Config not found: /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
W0109 16:52:32.735] 2020/01/09 16:52:32 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 20m33.359194004s
W0109 16:52:32.735] 2020/01/09 16:52:32 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0109 16:52:32.736] 2020/01/09 16:52:32 main.go:316: Something went wrong: encountered 1 errors: [error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1215301413748346881 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1]
W0109 16:52:32.736] Traceback (most recent call last):
W0109 16:52:32.736]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0109 16:52:32.737]     main(parse_args())
W0109 16:52:32.737]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0109 16:52:32.737]     mode.start(runner_args)
W0109 16:52:32.737]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0109 16:52:32.737]     check_env(env, self.command, *args)
W0109 16:52:32.737]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0109 16:52:32.737]     subprocess.check_call(cmd, env=env)
W0109 16:52:32.738]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0109 16:52:32.738]     raise CalledProcessError(retcode, cmd)
W0109 16:52:32.739] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--provider=gce', '--cluster=kubemark-5000', '--gcp-network=kubemark-5000', '--extract=ci/latest', '--gcp-node-image=gci', '--gcp-node-size=n1-standard-8', '--gcp-nodes=84', '--gcp-project=kubemark-scalability-testing', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=5000', '--test_args=--ginkgo.focus=xxxx', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--experimental-gcp-snapshot-prometheus-disk=true', '--test-cmd-args=--experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1215301413748346881', '--test-cmd-args=--nodes=5000', '--test-cmd-args=--provider=kubemark', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_restart_count_check.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=1080m', '--logexporter-gcs-path=gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1215301413748346881/artifacts')' returned non-zero exit status 1
E0109 16:52:32.740] Command failed
I0109 16:52:32.740] process 514 exited with code 1 after 54.1m
E0109 16:52:32.740] FAIL: ci-kubernetes-kubemark-gce-scale
I0109 16:52:32.740] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0109 16:52:33.255] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0109 16:52:33.310] process 37682 exited with code 0 after 0.0m
I0109 16:52:33.310] Call:  gcloud config get-value account
I0109 16:52:33.650] process 37695 exited with code 0 after 0.0m
I0109 16:52:33.651] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
... skipping 21 lines ...
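For this particular run, the build log above shows clusterloader2 giving up after roughly 15 minutes of "Waiting for Prometheus stack to become healthy...", with repeated "get services http:prometheus-k8s:9090" errors, i.e. the apiserver could not proxy to the prometheus-k8s service in the monitoring namespace. A few standard kubectl checks are one way to narrow that down on a live reproduction; run them against the kubeconfig clusterloader2 used to create the monitoring namespace (the harness manages both a root and a kubemark kubeconfig, so picking the right one is an assumption here). The namespace and service name come from the log; the statefulset name and the /-/healthy proxy path assume a stock kube-prometheus-style deployment.

  # Did the prometheus-k8s pods ever get scheduled and become Ready?
  kubectl -n monitoring get pods -o wide
  kubectl -n monitoring get events --sort-by=.lastTimestamp
  # kube-prometheus-style setups manage Prometheus via a statefulset named prometheus-k8s (assumption)
  kubectl -n monitoring describe statefulset prometheus-k8s
  # Roughly the same path the health check fails on: apiserver service proxy to prometheus-k8s:9090
  kubectl get --raw "/api/v1/namespaces/monitoring/services/http:prometheus-k8s:9090/proxy/-/healthy"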