Result: FAILURE
Tests: 1 failed / 17 succeeded
Started: 2020-01-06 06:51
Elapsed: 43m11s
Revision: v1.18.0-alpha.1.341+0f61791bc7330c
Builder: gke-prow-ssd-pool-1a225945-577q
pod: b1f8e8ee-3050-11ea-a07b-c6eb1bf16817
resultstore: https://source.cloud.google.com/results/invocations/8795796d-7983-418f-b9a4-80dae4c3568a/targets/test
infra-commit: 6b50e8c07
job-version: v1.18.0-alpha.1.341+0f61791bc7330c
repo: k8s.io/kubernetes
repo-commit: 0f61791bc7330c8402e6b9fe93a86ad064607f77
repos: k8s.io/kubernetes: master, k8s.io/perf-tests: master

Test Failures


ClusterLoaderV2 15m31s

error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1214076682021900288 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1
				from junit_runner.xml

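What timed out here is the Prometheus stack setup ("Error while setting up prometheus stack: timed out waiting for the condition" in the log below), after roughly 15 minutes of "the server is currently unable to handle the request (get services http:prometheus-k8s:9090)". A minimal debugging sketch, assuming kubectl access to the test cluster and the object names that appear in the log (namespace monitoring, service/StatefulSet prometheus-k8s); these commands are not part of the job itself:

  # Is the prometheus-k8s service present, and does it have ready endpoints?
  kubectl -n monitoring get svc prometheus-k8s
  kubectl -n monitoring get endpoints prometheus-k8s

  # Are the Prometheus pods scheduled and ready?
  kubectl -n monitoring get pods -o wide
  kubectl -n monitoring describe statefulset prometheus-k8s

  # Recent events usually explain Pending pods (unbound PVCs, insufficient resources, ...).
  kubectl -n monitoring get events --sort-by=.lastTimestamp | tail -n 30

The "get services http:prometheus-k8s:9090" error typically comes from the apiserver's service proxy returning 503 because the service has no ready endpoints, so the pod and event output above is usually where the real cause shows up.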



Error lines from build-log.txt

... skipping 427 lines ...
W0106 06:56:47.894] Trying to find master named 'kubemark-5000-master'
W0106 06:56:47.895] Looking for address 'kubemark-5000-master-ip'
W0106 06:56:48.785] Looking for address 'kubemark-5000-master-internal-ip'
I0106 06:56:49.722] Waiting up to 300 seconds for cluster initialization.
I0106 06:56:49.723] 
I0106 06:56:49.723]   This will continually check to see if the API for kubernetes is reachable.
I0106 06:56:49.723]   This may time out if there was some uncaught error during start up.
I0106 06:56:49.723] 
W0106 06:56:49.824] Using master: kubemark-5000-master (external IP: 35.243.232.192; internal IP: 10.40.0.2)
I0106 06:56:49.925] Kubernetes cluster created.
I0106 06:56:50.037] Cluster "kubemark-scalability-testing_kubemark-5000" set.
I0106 06:56:50.208] User "kubemark-scalability-testing_kubemark-5000" set.
I0106 06:56:50.385] Context "kubemark-scalability-testing_kubemark-5000" created.
... skipping 102 lines ...
I0106 06:57:45.954] kubemark-5000-minion-group-x5np   Ready                      <none>   22s   v1.18.0-alpha.1.341+0f61791bc7330c
I0106 06:57:45.955] kubemark-5000-minion-group-x695   Ready                      <none>   24s   v1.18.0-alpha.1.341+0f61791bc7330c
I0106 06:57:45.955] kubemark-5000-minion-group-xfgg   Ready                      <none>   20s   v1.18.0-alpha.1.341+0f61791bc7330c
I0106 06:57:45.955] kubemark-5000-minion-group-xl20   Ready                      <none>   23s   v1.18.0-alpha.1.341+0f61791bc7330c
I0106 06:57:45.955] kubemark-5000-minion-heapster     Ready                      <none>   36s   v1.18.0-alpha.1.341+0f61791bc7330c
I0106 06:57:46.235] Validate output:
I0106 06:57:46.492] NAME                 STATUS    MESSAGE             ERROR
I0106 06:57:46.492] controller-manager   Healthy   ok                  
I0106 06:57:46.492] etcd-1               Healthy   {"health":"true"}   
I0106 06:57:46.492] scheduler            Healthy   ok                  
I0106 06:57:46.492] etcd-0               Healthy   {"health":"true"}   
I0106 06:57:46.498] Cluster validation succeeded
W0106 06:57:46.599] Done, listing cluster services:
... skipping 219 lines ...
W0106 07:00:27.018] Trying to find master named 'kubemark-5000-kubemark-master'
W0106 07:00:27.018] Looking for address 'kubemark-5000-kubemark-master-ip'
W0106 07:00:27.857] Looking for address 'kubemark-5000-kubemark-master-internal-ip'
I0106 07:00:28.720] Waiting up to 300 seconds for cluster initialization.
I0106 07:00:28.721] 
I0106 07:00:28.721]   This will continually check to see if the API for kubernetes is reachable.
I0106 07:00:28.721]   This may time out if there was some uncaught error during start up.
I0106 07:00:28.721] 
I0106 07:00:53.565] .........Kubernetes cluster created.
W0106 07:00:53.667] Using master: kubemark-5000-kubemark-master (external IP: 35.237.157.213; internal IP: 10.40.3.216)
I0106 07:00:53.894] Cluster "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I0106 07:00:54.042] User "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I0106 07:00:54.194] Context "kubemark-scalability-testing_kubemark-5000-kubemark" created.
... skipping 23 lines ...
I0106 07:01:34.249] NAME                            STATUS                        ROLES    AGE   VERSION
I0106 07:01:34.249] kubemark-5000-kubemark-master   NotReady,SchedulingDisabled   <none>   19s   v1.18.0-alpha.1.341+0f61791bc7330c
I0106 07:01:34.255] Found 1 node(s).
I0106 07:01:34.528] NAME                            STATUS                        ROLES    AGE   VERSION
I0106 07:01:34.529] kubemark-5000-kubemark-master   NotReady,SchedulingDisabled   <none>   19s   v1.18.0-alpha.1.341+0f61791bc7330c
I0106 07:01:34.848] Validate output:
I0106 07:01:35.128] NAME                 STATUS    MESSAGE             ERROR
I0106 07:01:35.128] scheduler            Healthy   ok                  
I0106 07:01:35.128] controller-manager   Healthy   ok                  
I0106 07:01:35.128] etcd-0               Healthy   {"health":"true"}   
I0106 07:01:35.128] etcd-1               Healthy   {"health":"true"}   
I0106 07:01:35.134] Cluster validation encountered some problems, but cluster should be in working order
W0106 07:01:35.235] ...ignoring non-fatal errors in validate-cluster
W0106 07:01:35.235] Done, listing cluster services:
W0106 07:01:35.236] 
I0106 07:01:35.403] Kubernetes master is running at https://35.237.157.213
I0106 07:01:35.403] 
I0106 07:01:35.404] To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
I0106 07:01:35.409] 
... skipping 5143 lines ...
W0106 07:07:37.409] I0106 07:07:37.409760   28991 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-serviceMonitor.yaml
W0106 07:07:37.456] I0106 07:07:37.456154   28991 prometheus.go:201] Exposing kube-apiserver metrics in kubemark cluster
W0106 07:07:37.602] I0106 07:07:37.602663   28991 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-endpoints.yaml
W0106 07:07:37.642] I0106 07:07:37.642193   28991 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-service.yaml
W0106 07:07:37.679] I0106 07:07:37.679722   28991 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-serviceMonitor.yaml
W0106 07:07:37.718] I0106 07:07:37.718042   28991 prometheus.go:277] Waiting for Prometheus stack to become healthy...
W0106 07:08:07.756] W0106 07:08:07.755800   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:08:37.757] W0106 07:08:37.756700   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:09:07.756] W0106 07:09:07.756607   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:09:37.758] W0106 07:09:37.757880   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:10:07.760] W0106 07:10:07.760196   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:10:37.756] W0106 07:10:37.756426   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:11:07.756] W0106 07:11:07.756469   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:11:37.759] W0106 07:11:37.757644   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:12:07.756] W0106 07:12:07.756650   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:12:37.757] W0106 07:12:37.757453   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:13:07.757] W0106 07:13:07.756990   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:13:37.758] W0106 07:13:37.757888   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:14:07.757] W0106 07:14:07.756839   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:14:37.757] W0106 07:14:37.756797   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:15:07.757] W0106 07:15:07.757503   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:15:37.760] W0106 07:15:37.760607   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:16:07.761] W0106 07:16:07.761218   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:16:37.757] W0106 07:16:37.757145   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:17:07.757] W0106 07:17:07.757100   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:17:37.757] W0106 07:17:37.756805   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:18:07.756] W0106 07:18:07.756238   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:18:37.756] W0106 07:18:37.756552   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:19:07.757] W0106 07:19:07.756821   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:19:37.756] W0106 07:19:37.756720   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:20:07.757] W0106 07:20:07.756962   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:20:37.757] W0106 07:20:37.756742   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:21:07.757] W0106 07:21:07.757542   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:21:37.758] W0106 07:21:37.758132   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:22:07.757] W0106 07:22:07.757580   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:22:37.756] W0106 07:22:37.756441   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:22:37.795] W0106 07:22:37.794797   28991 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0106 07:22:37.795] I0106 07:22:37.794825   28991 prometheus.go:325] Dumping monitoring/prometheus-k8s events...
W0106 07:22:37.832] I0106 07:22:37.831182   28991 prometheus.go:336] {
W0106 07:22:37.832]   "metadata": {
W0106 07:22:37.832]     "selfLink": "/api/v1/namespaces/monitoring/events",
W0106 07:22:37.833]     "resourceVersion": "74622"
W0106 07:22:37.833]   },
... skipping 57 lines ...
W0106 07:22:37.844]       "eventTime": null,
W0106 07:22:37.844]       "reportingComponent": "",
W0106 07:22:37.844]       "reportingInstance": ""
W0106 07:22:37.844]     }
W0106 07:22:37.844]   ]
W0106 07:22:37.844] }
W0106 07:22:37.844] F0106 07:22:37.831210   28991 clusterloader.go:248] Error while setting up prometheus stack: timed out waiting for the condition
W0106 07:22:37.864] 2020/01/06 07:22:37 process.go:155: Step '/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1214076682021900288 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml' finished in 15m31.881507836s
W0106 07:22:37.865] 2020/01/06 07:22:37 e2e.go:531: Dumping logs from nodes to GCS directly at path: gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1214076682021900288/artifacts
W0106 07:22:37.865] 2020/01/06 07:22:37 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1214076682021900288/artifacts
W0106 07:22:37.867] 2020/01/06 07:22:37 process.go:153: Running: ./test/kubemark/master-log-dump.sh /workspace/_artifacts
W0106 07:22:37.948] Trying to find master named 'kubemark-5000-master'
W0106 07:22:37.949] Looking for address 'kubemark-5000-master-internal-ip'
... skipping 22 lines ...
W0106 07:23:17.249] 
W0106 07:23:17.249] Specify --start=47729 in the next get-serial-port-output invocation to get only the new output starting from here.
W0106 07:23:23.314] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0106 07:23:23.383] scp: /var/log/fluentd.log*: No such file or directory
W0106 07:23:23.384] scp: /var/log/kubelet.cov*: No such file or directory
W0106 07:23:23.385] scp: /var/log/startupscript.log*: No such file or directory
W0106 07:23:23.392] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0106 07:23:23.493] Dumping logs from nodes to GCS directly at 'gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1214076682021900288/artifacts' using logexporter
I0106 07:23:23.493] Detecting nodes in the cluster
I0106 07:23:27.609] namespace/logexporter created
I0106 07:23:27.647] secret/google-service-account created
I0106 07:23:27.687] daemonset.apps/logexporter created
W0106 07:23:28.622] CommandException: One or more URLs matched no objects.
W0106 07:23:44.596] CommandException: One or more URLs matched no objects.
W0106 07:23:52.018] scp: /var/log/glbc.log*: No such file or directory
W0106 07:23:52.019] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0106 07:23:52.088] scp: /var/log/fluentd.log*: No such file or directory
W0106 07:23:52.089] scp: /var/log/kubelet.cov*: No such file or directory
W0106 07:23:52.089] scp: /var/log/startupscript.log*: No such file or directory
W0106 07:23:52.093] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0106 07:23:52.179] 2020/01/06 07:23:52 process.go:155: Step './test/kubemark/master-log-dump.sh /workspace/_artifacts' finished in 1m14.312004206s
W0106 07:23:52.179] 2020/01/06 07:23:52 process.go:153: Running: ./test/kubemark/stop-kubemark.sh
I0106 07:23:52.280] Skipping dumping of node logs
I0106 07:24:01.021] Successfully listed marker files for successful nodes
I0106 07:24:17.072] Successfully listed marker files for successful nodes
I0106 07:24:17.536] Fetching logs from logexporter-2l6qv running on kubemark-5000-minion-group-1qv8
... skipping 238 lines ...
I0106 07:32:35.789] kubemark-5000-kubemark-master-etcd
I0106 07:32:35.789] kubemark-5000-kubemark-master-https
I0106 07:32:35.789] kubemark-5000-kubemark-minion-all
W0106 07:32:41.583] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/zones/us-east1-b/instances/kubemark-5000-kubemark-master].
W0106 07:32:44.668] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-default-internal-master].
W0106 07:32:45.524] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-default-internal-node].
W0106 07:32:45.640] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0106 07:32:45.640]  - The resource 'projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-https' is not ready
W0106 07:32:45.641] 
W0106 07:32:46.266] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-etcd].
W0106 07:32:46.873] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0106 07:32:46.873]  - The resource 'projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-minion-all' was not found
W0106 07:32:46.873] 
W0106 07:32:46.951] Failed to delete firewall rules.
W0106 07:32:47.307] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-https].
W0106 07:32:47.873] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-minion-all].
I0106 07:32:48.774] Deleting custom subnet...
W0106 07:32:49.886] ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
W0106 07:32:49.886]  - The subnetwork resource 'projects/kubemark-scalability-testing/regions/us-east1/subnetworks/kubemark-5000-custom-subnet' is already being used by 'projects/kubemark-scalability-testing/regions/us-east1/addresses/kubemark-5000-kubemark-master-internal-ip'
W0106 07:32:49.887] 
W0106 07:32:52.677] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/regions/us-east1/addresses/kubemark-5000-kubemark-master-ip].
W0106 07:32:53.338] ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
W0106 07:32:53.338]  - The network resource 'projects/kubemark-scalability-testing/global/networks/kubemark-5000' is already being used by 'projects/kubemark-scalability-testing/regions/us-east1/subnetworks/kubemark-5000-custom-subnet'
W0106 07:32:53.339] 
I0106 07:32:53.439] Failed to delete network 'kubemark-5000'. Listing firewall-rules:
W0106 07:32:54.100] 
W0106 07:32:54.100] To show all fields of the firewall, please show in JSON format: --format=json
W0106 07:32:54.100] To show all fields in table format, please see the examples in --help.
W0106 07:32:54.100] 
W0106 07:32:54.314] W0106 07:32:54.313660   36756 loader.go:223] Config not found: /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
W0106 07:32:54.458] W0106 07:32:54.458617   36806 loader.go:223] Config not found: /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
... skipping 17 lines ...
I0106 07:33:03.983] Property "users.kubemark-scalability-testing_kubemark-5000-kubemark-basic-auth" unset.
I0106 07:33:04.142] Property "contexts.kubemark-scalability-testing_kubemark-5000-kubemark" unset.
I0106 07:33:04.148] Cleared config for kubemark-scalability-testing_kubemark-5000-kubemark from /workspace/.kube/config
I0106 07:33:04.148] Done
W0106 07:33:04.177] 2020/01/06 07:33:04 process.go:155: Step './test/kubemark/stop-kubemark.sh' finished in 9m11.972739025s
W0106 07:33:04.178] 2020/01/06 07:33:04 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0106 07:33:04.179] 2020/01/06 07:33:04 main.go:319: Something went wrong: encountered 1 errors: [error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1214076682021900288 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1]
W0106 07:33:04.179] Traceback (most recent call last):
W0106 07:33:04.179]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0106 07:33:04.180]     main(parse_args())
W0106 07:33:04.180]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0106 07:33:04.180]     mode.start(runner_args)
W0106 07:33:04.180]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0106 07:33:04.181]     check_env(env, self.command, *args)
W0106 07:33:04.181]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0106 07:33:04.181]     subprocess.check_call(cmd, env=env)
W0106 07:33:04.181]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0106 07:33:04.181]     raise CalledProcessError(retcode, cmd)
W0106 07:33:04.183] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--provider=gce', '--cluster=kubemark-5000', '--gcp-network=kubemark-5000', '--extract=ci/latest', '--gcp-node-image=gci', '--gcp-node-size=n1-standard-8', '--gcp-nodes=84', '--gcp-project=kubemark-scalability-testing', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=5000', '--test_args=--ginkgo.focus=xxxx', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--experimental-gcp-snapshot-prometheus-disk=true', '--test-cmd-args=--experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1214076682021900288', '--test-cmd-args=--nodes=5000', '--test-cmd-args=--provider=kubemark', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_restart_count_check.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=1080m', '--logexporter-gcs-path=gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1214076682021900288/artifacts')' returned non-zero exit status 1
E0106 07:33:04.183] Command failed
I0106 07:33:04.184] process 507 exited with code 1 after 40.6m
E0106 07:33:04.184] FAIL: ci-kubernetes-kubemark-gce-scale
I0106 07:33:04.184] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0106 07:33:04.678] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0106 07:33:04.728] process 37318 exited with code 0 after 0.0m
I0106 07:33:04.729] Call:  gcloud config get-value account
I0106 07:33:05.042] process 37331 exited with code 0 after 0.0m
I0106 07:33:05.042] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
... skipping 21 lines ...
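For reference, the failing step is the ClusterLoaderV2 run driven by run-e2e.sh, recorded in full in the traceback above. A minimal local reproduction sketch under stated assumptions (a checkout of k8s.io/perf-tests and a kubeconfig already pointing at an existing kubemark cluster; the CI-only snapshot and testoverrides flags are dropped):

  cd "$GOPATH/src/k8s.io/perf-tests"
  ./run-e2e.sh cluster-loader2 \
    --nodes=5000 \
    --provider=kubemark \
    --report-dir=/tmp/_artifacts \
    --testconfig=testing/density/config.yaml \
    --testconfig=testing/load/config.yaml

All flags above are taken verbatim from the failing command; the paths and the target cluster are assumptions.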