Result: FAILURE
Tests: 1 failed / 17 succeeded
Started: 2020-01-05 06:47
Elapsed: 43m30s
Revision: v1.18.0-alpha.1.329+2d56d750617295
Builder: gke-prow-ssd-pool-1a225945-d6fn
resultstore: https://source.cloud.google.com/results/invocations/7a005fde-1293-4f47-986d-eac2756ea5f3/targets/test
pod: 1be6de66-2f87-11ea-a07b-c6eb1bf16817
infra-commit: 235805981
job-version: v1.18.0-alpha.1.329+2d56d750617295
repo: k8s.io/kubernetes
repo-commit: 2d56d7506172956ec5cdde00173b8f4ad4e4b4e5
repos: k8s.io/kubernetes: master, k8s.io/perf-tests: master

Test Failures

ClusterLoaderV2 (15m34s)

error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1213713525743030272 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1
				from junit_runner.xml
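
The failure is the Prometheus stack health check timing out: between 07:04:51 and 07:19:51 the log below shows clusterloader2 polling the prometheus-k8s service through the apiserver's service proxy roughly every 30 seconds, never getting a healthy response, and finally aborting with "timed out waiting for the condition" (the standard message from the apimachinery wait package). The Go sketch below is a minimal illustration of that kind of readiness poll using client-go; it is not clusterloader2's actual code, and the namespace, health path, 30s interval, and 15m timeout are assumptions chosen to match the log output.

// Minimal sketch (not clusterloader2's actual code) of a readiness poll that
// produces the failure above: probe the prometheus-k8s service through the
// apiserver's service proxy until it answers, or give up after a fixed timeout.
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPrometheus(client kubernetes.Interface) error {
	return wait.Poll(30*time.Second, 15*time.Minute, func() (bool, error) {
		// This proxy path is what yields the repeated
		// "get services http:prometheus-k8s:9090" errors in the log while
		// the Prometheus endpoints are not yet ready.
		_, err := client.CoreV1().
			Services("monitoring").
			ProxyGet("http", "prometheus-k8s", "9090", "/-/healthy", nil).
			DoRaw(context.TODO())
		if err != nil {
			fmt.Printf("error while calling prometheus api: %v\n", err)
			return false, nil // not ready yet; keep polling
		}
		return true, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := waitForPrometheus(kubernetes.NewForConfigOrDie(config)); err != nil {
		// On expiry wait.Poll returns wait.ErrWaitTimeout, whose message is
		// exactly "timed out waiting for the condition" seen in the log.
		panic(err)
	}
}

Returning (false, nil) from the condition keeps the poll going across transient errors, so only the final timeout surfaces as the test failure; returning the error instead would abort on the first unhealthy probe.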


Error lines from build-log.txt

... skipping 428 lines ...
W0105 06:53:57.306] Trying to find master named 'kubemark-5000-master'
W0105 06:53:57.306] Looking for address 'kubemark-5000-master-ip'
W0105 06:53:58.121] Looking for address 'kubemark-5000-master-internal-ip'
I0105 06:53:59.001] Waiting up to 300 seconds for cluster initialization.
I0105 06:53:59.002] 
I0105 06:53:59.002]   This will continually check to see if the API for kubernetes is reachable.
I0105 06:53:59.002]   This may time out if there was some uncaught error during start up.
I0105 06:53:59.002] 
W0105 06:53:59.103] Using master: kubemark-5000-master (external IP: 35.243.232.192; internal IP: 10.40.0.2)
I0105 06:54:01.864] Kubernetes cluster created.
I0105 06:54:02.046] Cluster "kubemark-scalability-testing_kubemark-5000" set.
I0105 06:54:02.224] User "kubemark-scalability-testing_kubemark-5000" set.
I0105 06:54:02.410] Context "kubemark-scalability-testing_kubemark-5000" created.
... skipping 103 lines ...
I0105 06:54:58.982] kubemark-5000-minion-group-wz9t   Ready                      <none>   13s   v1.18.0-alpha.1.329+2d56d750617295
I0105 06:54:58.982] kubemark-5000-minion-group-x2qf   Ready                      <none>   19s   v1.18.0-alpha.1.329+2d56d750617295
I0105 06:54:58.982] kubemark-5000-minion-group-xj7m   Ready                      <none>   18s   v1.18.0-alpha.1.329+2d56d750617295
I0105 06:54:58.982] kubemark-5000-minion-group-z99m   Ready                      <none>   14s   v1.18.0-alpha.1.329+2d56d750617295
I0105 06:54:58.982] kubemark-5000-minion-heapster     Ready                      <none>   25s   v1.18.0-alpha.1.329+2d56d750617295
I0105 06:54:59.300] Validate output:
I0105 06:54:59.585] NAME                 STATUS    MESSAGE             ERROR
I0105 06:54:59.585] scheduler            Healthy   ok                  
I0105 06:54:59.585] etcd-0               Healthy   {"health":"true"}   
I0105 06:54:59.586] controller-manager   Healthy   ok                  
I0105 06:54:59.586] etcd-1               Healthy   {"health":"true"}   
I0105 06:54:59.594] Cluster validation succeeded
W0105 06:54:59.695] Done, listing cluster services:
... skipping 219 lines ...
W0105 06:57:30.428] Trying to find master named 'kubemark-5000-kubemark-master'
W0105 06:57:30.428] Looking for address 'kubemark-5000-kubemark-master-ip'
W0105 06:57:31.355] Looking for address 'kubemark-5000-kubemark-master-internal-ip'
I0105 06:57:32.272] Waiting up to 300 seconds for cluster initialization.
I0105 06:57:32.272] 
I0105 06:57:32.272]   This will continually check to see if the API for kubernetes is reachable.
I0105 06:57:32.272]   This may time out if there was some uncaught error during start up.
I0105 06:57:32.273] 
W0105 06:57:32.373] Using master: kubemark-5000-kubemark-master (external IP: 35.237.157.213; internal IP: 10.40.3.216)
I0105 06:58:04.280] ...........Kubernetes cluster created.
I0105 06:58:04.485] Cluster "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I0105 06:58:04.690] User "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I0105 06:58:04.872] Context "kubemark-scalability-testing_kubemark-5000-kubemark" created.
... skipping 19 lines ...
I0105 06:58:33.466] Found 0 Nodes, allowing additional 2 iterations for other Nodes to join.
I0105 06:58:33.467] Waiting for 1 ready nodes. 0 ready nodes, 1 registered. Retrying.
I0105 06:58:48.778] Found 1 node(s).
I0105 06:58:49.063] NAME                            STATUS                     ROLES    AGE   VERSION
I0105 06:58:49.064] kubemark-5000-kubemark-master   Ready,SchedulingDisabled   <none>   24s   v1.18.0-alpha.1.329+2d56d750617295
I0105 06:58:49.394] Validate output:
I0105 06:58:49.696] NAME                 STATUS    MESSAGE             ERROR
I0105 06:58:49.696] scheduler            Healthy   ok                  
I0105 06:58:49.696] controller-manager   Healthy   ok                  
I0105 06:58:49.696] etcd-0               Healthy   {"health":"true"}   
I0105 06:58:49.696] etcd-1               Healthy   {"health":"true"}   
I0105 06:58:49.705] Cluster validation succeeded
W0105 06:58:49.806] Done, listing cluster services:
... skipping 5147 lines ...
W0105 07:04:51.358] I0105 07:04:51.358431   28867 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-serviceMonitor.yaml
W0105 07:04:51.397] I0105 07:04:51.396764   28867 prometheus.go:201] Exposing kube-apiserver metrics in kubemark cluster
W0105 07:04:51.549] I0105 07:04:51.548166   28867 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-endpoints.yaml
W0105 07:04:51.587] I0105 07:04:51.586947   28867 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-service.yaml
W0105 07:04:51.627] I0105 07:04:51.627231   28867 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-serviceMonitor.yaml
W0105 07:04:51.667] I0105 07:04:51.666932   28867 prometheus.go:277] Waiting for Prometheus stack to become healthy...
W0105 07:05:21.706] W0105 07:05:21.705602   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:05:51.705] W0105 07:05:51.704838   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:06:21.705] W0105 07:06:21.704849   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:06:51.706] W0105 07:06:51.705660   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:07:21.708] W0105 07:07:21.707801   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:07:51.705] W0105 07:07:51.705242   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:08:21.705] W0105 07:08:21.704876   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:08:51.706] W0105 07:08:51.705818   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:09:21.706] W0105 07:09:21.706254   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:09:51.706] W0105 07:09:51.706626   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:10:21.705] W0105 07:10:21.705180   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:10:51.707] W0105 07:10:51.707506   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:11:21.706] W0105 07:11:21.706494   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:11:51.706] W0105 07:11:51.705851   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:12:21.706] W0105 07:12:21.706132   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:12:51.705] W0105 07:12:51.705698   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:13:21.706] W0105 07:13:21.705887   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:13:51.706] W0105 07:13:51.706655   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:14:21.707] W0105 07:14:21.706696   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:14:51.709] W0105 07:14:51.709081   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:15:21.706] W0105 07:15:21.705762   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:15:51.706] W0105 07:15:51.706181   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:16:21.705] W0105 07:16:21.705548   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:16:51.706] W0105 07:16:51.706354   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:17:21.707] W0105 07:17:21.706682   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:17:51.706] W0105 07:17:51.706536   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:18:21.705] W0105 07:18:21.705011   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:18:51.706] W0105 07:18:51.706353   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:19:21.707] W0105 07:19:21.707486   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:19:51.707] W0105 07:19:51.706873   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:19:51.743] W0105 07:19:51.743171   28867 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0105 07:19:51.743] I0105 07:19:51.743200   28867 prometheus.go:325] Dumping monitoring/prometheus-k8s events...
W0105 07:19:51.780] I0105 07:19:51.780081   28867 prometheus.go:336] {
W0105 07:19:51.781]   "metadata": {
W0105 07:19:51.781]     "selfLink": "/api/v1/namespaces/monitoring/events",
W0105 07:19:51.781]     "resourceVersion": "74633"
W0105 07:19:51.781]   },
... skipping 57 lines ...
W0105 07:19:51.793]       "eventTime": null,
W0105 07:19:51.793]       "reportingComponent": "",
W0105 07:19:51.793]       "reportingInstance": ""
W0105 07:19:51.793]     }
W0105 07:19:51.793]   ]
W0105 07:19:51.793] }
W0105 07:19:51.794] F0105 07:19:51.780113   28867 clusterloader.go:248] Error while setting up prometheus stack: timed out waiting for the condition
W0105 07:19:51.813] 2020/01/05 07:19:51 process.go:155: Step '/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1213713525743030272 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml' finished in 15m34.259326852s
W0105 07:19:51.814] 2020/01/05 07:19:51 e2e.go:531: Dumping logs from nodes to GCS directly at path: gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1213713525743030272/artifacts
W0105 07:19:51.814] 2020/01/05 07:19:51 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1213713525743030272/artifacts
W0105 07:19:51.814] 2020/01/05 07:19:51 process.go:153: Running: ./test/kubemark/master-log-dump.sh /workspace/_artifacts
W0105 07:19:51.910] Trying to find master named 'kubemark-5000-master'
W0105 07:19:51.911] Looking for address 'kubemark-5000-master-internal-ip'
... skipping 22 lines ...
W0105 07:20:33.654] 
W0105 07:20:33.654] Specify --start=47713 in the next get-serial-port-output invocation to get only the new output starting from here.
W0105 07:20:39.912] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0105 07:20:39.981] scp: /var/log/fluentd.log*: No such file or directory
W0105 07:20:39.982] scp: /var/log/kubelet.cov*: No such file or directory
W0105 07:20:39.982] scp: /var/log/startupscript.log*: No such file or directory
W0105 07:20:39.987] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0105 07:20:40.091] Dumping logs from nodes to GCS directly at 'gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1213713525743030272/artifacts' using logexporter
I0105 07:20:40.091] Detecting nodes in the cluster
I0105 07:20:44.921] namespace/logexporter created
I0105 07:20:44.958] secret/google-service-account created
I0105 07:20:44.995] daemonset.apps/logexporter created
W0105 07:20:46.071] CommandException: One or more URLs matched no objects.
W0105 07:21:02.396] CommandException: One or more URLs matched no objects.
W0105 07:21:08.780] scp: /var/log/glbc.log*: No such file or directory
W0105 07:21:08.780] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0105 07:21:08.849] scp: /var/log/fluentd.log*: No such file or directory
W0105 07:21:08.849] scp: /var/log/kubelet.cov*: No such file or directory
W0105 07:21:08.849] scp: /var/log/startupscript.log*: No such file or directory
W0105 07:21:08.855] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0105 07:21:08.951] 2020/01/05 07:21:08 process.go:155: Step './test/kubemark/master-log-dump.sh /workspace/_artifacts' finished in 1m17.137807247s
W0105 07:21:08.951] 2020/01/05 07:21:08 process.go:153: Running: ./test/kubemark/stop-kubemark.sh
I0105 07:21:09.052] Skipping dumping of node logs
I0105 07:21:18.977] Successfully listed marker files for successful nodes
I0105 07:21:35.177] Successfully listed marker files for successful nodes
I0105 07:21:35.622] Fetching logs from logexporter-27ppx running on kubemark-5000-minion-group-8xkr
... skipping 237 lines ...
W0105 07:28:59.386] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/regions/us-east1/routers/kubemark-5000-nat-router].
I0105 07:29:00.329] Deleting firewall rules remaining in network kubemark-5000: kubemark-5000-kubemark-default-internal-master
I0105 07:29:00.329] kubemark-5000-kubemark-default-internal-node
I0105 07:29:00.329] kubemark-5000-kubemark-master-etcd
I0105 07:29:00.329] kubemark-5000-kubemark-master-https
I0105 07:29:00.329] kubemark-5000-kubemark-minion-all
W0105 07:29:04.555] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0105 07:29:04.556]  - The resource 'projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-etcd' is not ready
W0105 07:29:04.556] 
W0105 07:29:06.022] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0105 07:29:06.023]  - The resource 'projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-https' is not ready
W0105 07:29:06.023] 
W0105 07:29:06.269] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0105 07:29:06.269]  - The resource 'projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-minion-all' is not ready
W0105 07:29:06.269] 
W0105 07:29:09.762] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-https].
W0105 07:29:09.775] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-default-internal-master].
W0105 07:29:10.564] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-default-internal-node].
W0105 07:29:10.652] Failed to delete firewall rules.
W0105 07:29:11.228] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-etcd].
I0105 07:29:11.555] Deleting custom subnet...
W0105 07:29:11.808] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-minion-all].
W0105 07:29:12.925] ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
W0105 07:29:12.925]  - The subnetwork resource 'projects/kubemark-scalability-testing/regions/us-east1/subnetworks/kubemark-5000-custom-subnet' is already being used by 'projects/kubemark-scalability-testing/regions/us-east1/addresses/kubemark-5000-kubemark-master-internal-ip'
W0105 07:29:12.925] 
W0105 07:29:16.544] ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
W0105 07:29:16.544]  - The network resource 'projects/kubemark-scalability-testing/global/networks/kubemark-5000' is already being used by 'projects/kubemark-scalability-testing/regions/us-east1/subnetworks/kubemark-5000-custom-subnet'
W0105 07:29:16.544] 
I0105 07:29:16.645] Failed to delete network 'kubemark-5000'. Listing firewall-rules:
W0105 07:29:17.486] 
W0105 07:29:17.486] To show all fields of the firewall, please show in JSON format: --format=json
W0105 07:29:17.487] To show all fields in table format, please see the examples in --help.
W0105 07:29:17.487] 
W0105 07:29:17.724] W0105 07:29:17.723873   36609 loader.go:223] Config not found: /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
W0105 07:29:17.902] W0105 07:29:17.902155   36657 loader.go:223] Config not found: /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
... skipping 18 lines ...
I0105 07:29:32.640] Property "users.kubemark-scalability-testing_kubemark-5000-kubemark-basic-auth" unset.
I0105 07:29:32.808] Property "contexts.kubemark-scalability-testing_kubemark-5000-kubemark" unset.
I0105 07:29:32.814] Cleared config for kubemark-scalability-testing_kubemark-5000-kubemark from /workspace/.kube/config
I0105 07:29:32.815] Done
W0105 07:29:32.841] 2020/01/05 07:29:32 process.go:155: Step './test/kubemark/stop-kubemark.sh' finished in 8m23.868590759s
W0105 07:29:32.841] 2020/01/05 07:29:32 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0105 07:29:32.842] 2020/01/05 07:29:32 main.go:319: Something went wrong: encountered 1 errors: [error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1213713525743030272 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1]
W0105 07:29:32.842] Traceback (most recent call last):
W0105 07:29:32.843]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0105 07:29:32.843]     main(parse_args())
W0105 07:29:32.843]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0105 07:29:32.843]     mode.start(runner_args)
W0105 07:29:32.844]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0105 07:29:32.844]     check_env(env, self.command, *args)
W0105 07:29:32.844]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0105 07:29:32.844]     subprocess.check_call(cmd, env=env)
W0105 07:29:32.844]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0105 07:29:32.845]     raise CalledProcessError(retcode, cmd)
W0105 07:29:32.846] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--provider=gce', '--cluster=kubemark-5000', '--gcp-network=kubemark-5000', '--extract=ci/latest', '--gcp-node-image=gci', '--gcp-node-size=n1-standard-8', '--gcp-nodes=84', '--gcp-project=kubemark-scalability-testing', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=5000', '--test_args=--ginkgo.focus=xxxx', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--experimental-gcp-snapshot-prometheus-disk=true', '--test-cmd-args=--experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1213713525743030272', '--test-cmd-args=--nodes=5000', '--test-cmd-args=--provider=kubemark', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_restart_count_check.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=1080m', '--logexporter-gcs-path=gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1213713525743030272/artifacts')' returned non-zero exit status 1
E0105 07:29:32.846] Command failed
I0105 07:29:32.847] process 509 exited with code 1 after 40.6m
E0105 07:29:32.847] FAIL: ci-kubernetes-kubemark-gce-scale
I0105 07:29:32.847] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0105 07:29:33.364] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0105 07:29:33.413] process 37195 exited with code 0 after 0.0m
I0105 07:29:33.414] Call:  gcloud config get-value account
I0105 07:29:33.753] process 37208 exited with code 0 after 0.0m
I0105 07:29:33.753] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
... skipping 21 lines ...