Result: FAILURE
Tests: 1 failed / 17 succeeded
Started: 2020-01-07 15:27
Elapsed: 55m34s
Revision: v1.18.0-alpha.1.409+f3df7a2fdb16dd
Builder: gke-prow-default-pool-cf4891d4-d9dz
pod: f0ab96b0-3161-11ea-bef5-ca9a35e1927f
resultstore: https://source.cloud.google.com/results/invocations/ccb8f6fa-84ac-407e-bf54-d49cc75fcc51/targets/test
infra-commit: a8c31a850
job-version: v1.18.0-alpha.1.409+f3df7a2fdb16dd
repo: k8s.io/kubernetes
repo-commit: f3df7a2fdb16dd7b00a9d357e337ed48a6f70d45
repos: k8s.io/kubernetes (master), k8s.io/perf-tests (master)

Test Failures


ClusterLoaderV2 15m33s

error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1214568804391063563 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1
from junit_runner.xml



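The failure is a setup error rather than a test assertion: ClusterLoaderV2 spent the whole 15m33s waiting for the Prometheus stack in the kubemark cluster to become healthy and then gave up ("Error while setting up prometheus stack: timed out waiting for the condition" in the log below, after repeated "error while calling prometheus api ... (get services http:prometheus-k8s:9090)" warnings). The commands below are a minimal diagnostic sketch and not part of this job; they assume kubectl access to the kubemark cluster context recorded in the log (kubemark-scalability-testing_kubemark-5000-kubemark) and the default kube-prometheus object names (namespace monitoring, service/statefulset prometheus-k8s, container prometheus).

  # Inspect the monitoring namespace that clusterloader2 polls.
  kubectl --context kubemark-scalability-testing_kubemark-5000-kubemark \
    -n monitoring get pods,svc,endpoints

  # Reproduce the call that keeps failing: an apiserver service-proxy GET to
  # http:prometheus-k8s:9090 (Prometheus serves /-/ready and /-/healthy).
  kubectl --context kubemark-scalability-testing_kubemark-5000-kubemark \
    get --raw "/api/v1/namespaces/monitoring/services/http:prometheus-k8s:9090/proxy/-/ready"

  # If the pods exist but never become ready, check recent events and the
  # Prometheus container logs (names assume the standard kube-prometheus layout).
  kubectl --context kubemark-scalability-testing_kubemark-5000-kubemark \
    -n monitoring get events --sort-by=.lastTimestamp
  kubectl --context kubemark-scalability-testing_kubemark-5000-kubemark \
    -n monitoring logs statefulset/prometheus-k8s -c prometheus --tail=100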

Error lines from build-log.txt

... skipping 427 lines ...
W0107 15:33:00.492] Looking for address 'kubemark-5000-master-ip'
W0107 15:33:01.370] Looking for address 'kubemark-5000-master-internal-ip'
W0107 15:33:02.347] Using master: kubemark-5000-master (external IP: 35.231.150.150; internal IP: 10.40.0.2)
I0107 15:33:02.447] Waiting up to 300 seconds for cluster initialization.
I0107 15:33:02.448] 
I0107 15:33:02.448]   This will continually check to see if the API for kubernetes is reachable.
I0107 15:33:02.448]   This may time out if there was some uncaught error during start up.
I0107 15:33:02.449] 
I0107 15:33:11.663] .Kubernetes cluster created.
I0107 15:33:11.842] Cluster "kubemark-scalability-testing_kubemark-5000" set.
I0107 15:33:12.041] User "kubemark-scalability-testing_kubemark-5000" set.
I0107 15:33:12.260] Context "kubemark-scalability-testing_kubemark-5000" created.
I0107 15:33:12.471] Switched to context "kubemark-scalability-testing_kubemark-5000".
... skipping 102 lines ...
I0107 15:34:09.494] kubemark-5000-minion-group-wtl6   Ready                      <none>   19s   v1.18.0-alpha.1.409+f3df7a2fdb16dd
I0107 15:34:09.494] kubemark-5000-minion-group-wx5j   Ready                      <none>   26s   v1.18.0-alpha.1.409+f3df7a2fdb16dd
I0107 15:34:09.494] kubemark-5000-minion-group-wzpf   Ready                      <none>   26s   v1.18.0-alpha.1.409+f3df7a2fdb16dd
I0107 15:34:09.495] kubemark-5000-minion-group-zp00   Ready                      <none>   18s   v1.18.0-alpha.1.409+f3df7a2fdb16dd
I0107 15:34:09.495] kubemark-5000-minion-heapster     Ready                      <none>   28s   v1.18.0-alpha.1.409+f3df7a2fdb16dd
I0107 15:34:09.779] Validate output:
I0107 15:34:10.075] NAME                 STATUS    MESSAGE             ERROR
I0107 15:34:10.075] controller-manager   Healthy   ok                  
I0107 15:34:10.075] scheduler            Healthy   ok                  
I0107 15:34:10.076] etcd-1               Healthy   {"health":"true"}   
I0107 15:34:10.076] etcd-0               Healthy   {"health":"true"}   
I0107 15:34:10.079] Cluster validation succeeded
W0107 15:34:10.180] Done, listing cluster services:
... skipping 220 lines ...
W0107 15:36:47.089] Looking for address 'kubemark-5000-kubemark-master-ip'
W0107 15:36:48.265] Looking for address 'kubemark-5000-kubemark-master-internal-ip'
W0107 15:36:49.442] Using master: kubemark-5000-kubemark-master (external IP: 35.243.232.192; internal IP: 10.40.3.216)
I0107 15:36:49.542] Waiting up to 300 seconds for cluster initialization.
I0107 15:36:49.542] 
I0107 15:36:49.543]   This will continually check to see if the API for kubernetes is reachable.
I0107 15:36:49.543]   This may time out if there was some uncaught error during start up.
I0107 15:36:49.543] 
I0107 15:37:14.210] .........Kubernetes cluster created.
I0107 15:37:14.377] Cluster "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I0107 15:37:14.557] User "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I0107 15:37:14.714] Context "kubemark-scalability-testing_kubemark-5000-kubemark" created.
I0107 15:37:14.884] Switched to context "kubemark-scalability-testing_kubemark-5000-kubemark".
... skipping 22 lines ...
I0107 15:37:55.785] NAME                            STATUS                        ROLES    AGE   VERSION
I0107 15:37:55.785] kubemark-5000-kubemark-master   NotReady,SchedulingDisabled   <none>   20s   v1.18.0-alpha.1.409+f3df7a2fdb16dd
I0107 15:37:55.789] Found 1 node(s).
I0107 15:37:56.048] NAME                            STATUS                     ROLES    AGE   VERSION
I0107 15:37:56.048] kubemark-5000-kubemark-master   Ready,SchedulingDisabled   <none>   21s   v1.18.0-alpha.1.409+f3df7a2fdb16dd
I0107 15:37:56.329] Validate output:
I0107 15:37:56.584] NAME                 STATUS    MESSAGE             ERROR
I0107 15:37:56.584] scheduler            Healthy   ok                  
I0107 15:37:56.584] controller-manager   Healthy   ok                  
I0107 15:37:56.584] etcd-1               Healthy   {"health":"true"}   
I0107 15:37:56.584] etcd-0               Healthy   {"health":"true"}   
I0107 15:37:56.587] Cluster validation encountered some problems, but cluster should be in working order
W0107 15:37:56.688] ...ignoring non-fatal errors in validate-cluster
W0107 15:37:56.688] Done, listing cluster services:
W0107 15:37:56.688] 
I0107 15:37:56.849] Kubernetes master is running at https://35.243.232.192
I0107 15:37:56.849] 
I0107 15:37:56.850] To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
I0107 15:37:56.853] 
... skipping 5142 lines ...
W0107 15:43:52.063] I0107 15:43:52.062894   29430 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-serviceMonitor.yaml
W0107 15:43:52.101] I0107 15:43:52.101337   29430 prometheus.go:201] Exposing kube-apiserver metrics in kubemark cluster
W0107 15:43:52.248] I0107 15:43:52.248398   29430 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-endpoints.yaml
W0107 15:43:52.289] I0107 15:43:52.288907   29430 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-service.yaml
W0107 15:43:52.330] I0107 15:43:52.330723   29430 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-serviceMonitor.yaml
W0107 15:43:52.370] I0107 15:43:52.370305   29430 prometheus.go:277] Waiting for Prometheus stack to become healthy...
W0107 15:44:22.409] W0107 15:44:22.409097   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:44:52.410] W0107 15:44:52.410144   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:45:22.409] W0107 15:45:22.409570   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:45:52.961] W0107 15:45:52.961114   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:46:22.411] W0107 15:46:22.411270   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:46:52.411] W0107 15:46:52.411227   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:47:22.411] W0107 15:47:22.411670   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:47:52.412] W0107 15:47:52.412341   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:48:22.411] W0107 15:48:22.411087   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:48:52.412] W0107 15:48:52.412316   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:49:22.412] W0107 15:49:22.411748   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:49:52.412] W0107 15:49:52.412253   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:50:22.411] W0107 15:50:22.410949   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:50:52.412] W0107 15:50:52.412189   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:51:22.411] W0107 15:51:22.411372   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:51:52.411] W0107 15:51:52.411100   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:52:22.410] W0107 15:52:22.409990   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:52:52.410] W0107 15:52:52.409740   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:53:22.410] W0107 15:53:22.409624   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:53:52.409] W0107 15:53:52.409515   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:54:22.410] W0107 15:54:22.409949   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:54:52.409] W0107 15:54:52.409483   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:55:22.409] W0107 15:55:22.409570   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:55:52.411] W0107 15:55:52.410627   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:56:22.409] W0107 15:56:22.409322   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:56:52.410] W0107 15:56:52.410292   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:57:22.410] W0107 15:57:22.409715   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:57:52.410] W0107 15:57:52.409923   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:58:22.409] W0107 15:58:22.409481   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:58:52.409] W0107 15:58:52.409175   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:58:52.447] W0107 15:58:52.447354   29430 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0107 15:58:52.448] I0107 15:58:52.447433   29430 prometheus.go:325] Dumping monitoring/prometheus-k8s events...
W0107 15:58:52.484] I0107 15:58:52.484273   29430 prometheus.go:336] {
W0107 15:58:52.484]   "metadata": {
W0107 15:58:52.485]     "selfLink": "/api/v1/namespaces/monitoring/events",
W0107 15:58:52.485]     "resourceVersion": "74586"
W0107 15:58:52.485]   },
... skipping 57 lines ...
W0107 15:58:52.496]       "eventTime": null,
W0107 15:58:52.496]       "reportingComponent": "",
W0107 15:58:52.496]       "reportingInstance": ""
W0107 15:58:52.496]     }
W0107 15:58:52.496]   ]
W0107 15:58:52.496] }
W0107 15:58:52.496] F0107 15:58:52.484308   29430 clusterloader.go:248] Error while setting up prometheus stack: timed out waiting for the condition
W0107 15:58:52.509] 2020/01/07 15:58:52 process.go:155: Step '/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1214568804391063563 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml' finished in 15m33.774271147s
W0107 15:58:52.510] 2020/01/07 15:58:52 e2e.go:531: Dumping logs from nodes to GCS directly at path: gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1214568804391063563/artifacts
W0107 15:58:52.510] 2020/01/07 15:58:52 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1214568804391063563/artifacts
W0107 15:58:52.512] 2020/01/07 15:58:52 process.go:153: Running: ./test/kubemark/master-log-dump.sh /workspace/_artifacts
W0107 15:58:52.582] Trying to find master named 'kubemark-5000-master'
W0107 15:58:52.583] Looking for address 'kubemark-5000-master-internal-ip'
... skipping 22 lines ...
W0107 15:59:32.782] 
W0107 15:59:32.782] Specify --start=47705 in the next get-serial-port-output invocation to get only the new output starting from here.
W0107 15:59:39.142] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0107 15:59:39.214] scp: /var/log/fluentd.log*: No such file or directory
W0107 15:59:39.214] scp: /var/log/kubelet.cov*: No such file or directory
W0107 15:59:39.214] scp: /var/log/startupscript.log*: No such file or directory
W0107 15:59:39.219] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0107 15:59:39.319] Dumping logs from nodes to GCS directly at 'gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1214568804391063563/artifacts' using logexporter
I0107 15:59:39.319] Detecting nodes in the cluster
I0107 15:59:43.645] namespace/logexporter created
I0107 15:59:43.684] secret/google-service-account created
I0107 15:59:43.723] daemonset.apps/logexporter created
W0107 15:59:44.604] CommandException: One or more URLs matched no objects.
W0107 16:00:00.695] CommandException: One or more URLs matched no objects.
W0107 16:00:06.416] scp: /var/log/glbc.log*: No such file or directory
W0107 16:00:06.417] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0107 16:00:06.486] scp: /var/log/fluentd.log*: No such file or directory
W0107 16:00:06.486] scp: /var/log/kubelet.cov*: No such file or directory
W0107 16:00:06.486] scp: /var/log/startupscript.log*: No such file or directory
W0107 16:00:06.489] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0107 16:00:06.588] 2020/01/07 16:00:06 process.go:155: Step './test/kubemark/master-log-dump.sh /workspace/_artifacts' finished in 1m14.076439414s
W0107 16:00:06.589] 2020/01/07 16:00:06 process.go:153: Running: ./test/kubemark/stop-kubemark.sh
I0107 16:00:06.689] Skipping dumping of node logs
I0107 16:00:17.043] Successfully listed marker files for successful nodes
I0107 16:00:33.126] Successfully listed marker files for successful nodes
I0107 16:00:33.685] Fetching logs from logexporter-25j7m running on kubemark-5000-minion-group-vxml
... skipping 230 lines ...
I0107 16:05:55.040] Property "users.kubemark-scalability-testing_kubemark-5000-kubemark-basic-auth" unset.
I0107 16:05:55.173] Property "contexts.kubemark-scalability-testing_kubemark-5000-kubemark" unset.
I0107 16:05:55.177] Cleared config for kubemark-scalability-testing_kubemark-5000-kubemark from /workspace/.kube/config
I0107 16:05:55.177] Done
W0107 16:05:55.277] 2020/01/07 16:05:55 process.go:155: Step './test/kubemark/stop-kubemark.sh' finished in 5m48.591909192s
W0107 16:06:07.770] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/zones/us-east1-b/instances/kubemark-5000-minion-heapster].
W0107 16:12:06.135] Failed to execute 'curl -s --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt --cert /etc/srv/kubernetes/pki/etcd-apiserver-client.crt --key /etc/srv/kubernetes/pki/etcd-apiserver-client.key https://127.0.0.1:2379/v2/members/$(curl -s --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt --cert /etc/srv/kubernetes/pki/etcd-apiserver-client.crt --key /etc/srv/kubernetes/pki/etcd-apiserver-client.key https://127.0.0.1:2379/v2/members -XGET | sed 's/{\"id/\n/g' | grep kubemark-5000-master\" | cut -f 3 -d \") -XDELETE -L 2>/dev/null' on kubemark-5000-master despite 5 attempts
W0107 16:12:06.135] Last attempt failed with: ssh: connect to host 35.231.150.150 port 22: Connection timed out

W0107 16:12:06.135] ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
I0107 16:12:06.236] Removing etcd replica, name: kubemark-5000-master, port: 2379, result: 1
W0107 16:18:02.002] Failed to execute 'curl -s  http://127.0.0.1:4002/v2/members/$(curl -s  http://127.0.0.1:4002/v2/members -XGET | sed 's/{\"id/\n/g' | grep kubemark-5000-master\" | cut -f 3 -d \") -XDELETE -L 2>/dev/null' on kubemark-5000-master despite 5 attempts
W0107 16:18:02.002] Last attempt failed with: ssh: connect to host 35.231.150.150 port 22: Connection timed out

W0107 16:18:02.002] ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
I0107 16:18:02.103] Removing etcd replica, name: kubemark-5000-master, port: 4002, result: 1
W0107 16:18:07.825] Updated [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/zones/us-east1-b/instances/kubemark-5000-master].
W0107 16:20:13.285] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/zones/us-east1-b/instances/kubemark-5000-master].
W0107 16:20:29.325] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-master-https].
W0107 16:20:30.112] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-master-etcd].
W0107 16:20:30.615] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-minion-all].
... skipping 21 lines ...
I0107 16:21:40.764] Cleared config for kubemark-scalability-testing_kubemark-5000 from /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
I0107 16:21:40.764] Done
W0107 16:21:40.784] W0107 16:21:40.759968   37808 loader.go:223] Config not found: /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
W0107 16:21:40.785] W0107 16:21:40.760151   37808 loader.go:223] Config not found: /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
W0107 16:21:40.785] 2020/01/07 16:21:40 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 19m18.65886392s
W0107 16:21:40.785] 2020/01/07 16:21:40 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0107 16:21:40.786] 2020/01/07 16:21:40 main.go:319: Something went wrong: encountered 1 errors: [error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1214568804391063563 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1]
W0107 16:21:40.786] Traceback (most recent call last):
W0107 16:21:40.786]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0107 16:21:40.787]     main(parse_args())
W0107 16:21:40.787]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0107 16:21:40.787]     mode.start(runner_args)
W0107 16:21:40.787]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0107 16:21:40.787]     check_env(env, self.command, *args)
W0107 16:21:40.787]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0107 16:21:40.787]     subprocess.check_call(cmd, env=env)
W0107 16:21:40.788]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0107 16:21:40.788]     raise CalledProcessError(retcode, cmd)
W0107 16:21:40.789] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--provider=gce', '--cluster=kubemark-5000', '--gcp-network=kubemark-5000', '--extract=ci/latest', '--gcp-node-image=gci', '--gcp-node-size=n1-standard-8', '--gcp-nodes=84', '--gcp-project=kubemark-scalability-testing', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=5000', '--test_args=--ginkgo.focus=xxxx', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--experimental-gcp-snapshot-prometheus-disk=true', '--test-cmd-args=--experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1214568804391063563', '--test-cmd-args=--nodes=5000', '--test-cmd-args=--provider=kubemark', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_restart_count_check.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=1080m', '--logexporter-gcs-path=gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1214568804391063563/artifacts')' returned non-zero exit status 1
E0107 16:21:40.789] Command failed
I0107 16:21:40.790] process 511 exited with code 1 after 52.8m
E0107 16:21:40.790] FAIL: ci-kubernetes-kubemark-gce-scale
I0107 16:21:40.790] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0107 16:21:41.314] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0107 16:21:41.371] process 37820 exited with code 0 after 0.0m
I0107 16:21:41.371] Call:  gcloud config get-value account
I0107 16:21:41.714] process 37833 exited with code 0 after 0.0m
I0107 16:21:41.715] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
... skipping 21 lines ...
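For triage outside the full kubetest up/test/down cycle, the failing step can be re-run on its own from a k8s.io/perf-tests checkout. The invocation below is a sketch, not part of this job: it reuses the flags recorded in the failed step, drops the CI-only Prometheus disk-snapshot flags, substitutes an illustrative local --report-dir, and assumes the kubemark cluster is already up with the environment (KUBECONFIG and the kubemark-specific variables) that the job harness normally provides for --provider=kubemark.

  # Re-run only the ClusterLoaderV2 step against an existing kubemark cluster.
  cd "$GOPATH/src/k8s.io/perf-tests"
  ./run-e2e.sh cluster-loader2 \
    --nodes=5000 \
    --provider=kubemark \
    --report-dir=/tmp/clusterloader2-artifacts \
    --testconfig=testing/density/config.yaml \
    --testconfig=testing/load/config.yaml \
    --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml \
    --testoverrides=./testing/experiments/enable_restart_count_check.yaml \
    --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml \
    --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml \
    --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml \
    --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml \
    --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml \
    --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml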