Result: FAILURE
Tests: 1 failed / 17 succeeded
Started: 2020-01-04 18:46
Elapsed: 45m45s
Revision: v1.18.0-alpha.1.324+3a1d076766d7b2
Builder: gke-prow-ssd-pool-1a225945-nk4t
pod: 635b350a-2f22-11ea-a07b-c6eb1bf16817
resultstore: https://source.cloud.google.com/results/invocations/a931b7db-adbc-4087-9b39-fb7f578140ae/targets/test
infra-commit: fa0e0f711
job-version: v1.18.0-alpha.1.324+3a1d076766d7b2
repo: k8s.io/kubernetes
repo-commit: 3a1d076766d7b281322fbfd4a4a6e5dc5e72d5eb
repos: k8s.io/kubernetes (master), k8s.io/perf-tests (master)

Test Failures


ClusterLoaderV2 (15m37s)

error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1213532076272259074 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1
				from junit_runner.xml
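
The failure itself is the test harness giving up on Prometheus: as the build log below shows, clusterloader2 polls the prometheus-k8s service in the monitoring namespace through the apiserver proxy roughly every 30 seconds and aborts after about 15 minutes with "timed out waiting for the condition" (clusterloader.go:248). A minimal Go sketch of that style of readiness poll follows; it uses client-go's service ProxyGet plus apimachinery's wait.Poll, and the endpoint path, query, and kubeconfig wiring are assumptions inferred from the log rather than taken from the clusterloader2 source, so treat it as an illustration of the pattern, not the actual implementation.

// Sketch only: poll Prometheus through the apiserver service proxy until it
// answers, mirroring the ~30s interval and ~15m budget visible in the log.
// Namespace, service name, port, and path are inferred from the log or assumed.
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPrometheus(client kubernetes.Interface) error {
	return wait.Poll(30*time.Second, 15*time.Minute, func() (bool, error) {
		// "get services http:prometheus-k8s:9090" in the log is this kind of
		// apiserver service-proxy request.
		_, err := client.CoreV1().
			Services("monitoring").
			ProxyGet("http", "prometheus-k8s", "9090", "api/v1/query", map[string]string{"query": "up"}).
			DoRaw(context.TODO())
		if err != nil {
			fmt.Printf("error while calling prometheus api: %v\n", err)
			return false, nil // not ready yet; retry until the timeout
		}
		return true, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := waitForPrometheus(kubernetes.NewForConfigOrDie(config)); err != nil {
		// wait.Poll surfaces "timed out waiting for the condition", the same
		// message that ends the Prometheus setup in this run.
		panic(fmt.Errorf("error while setting up prometheus stack: %w", err))
	}
}

The repeated "error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)" lines in the log are what such a proxied GET returns while the Prometheus pods are not yet serving.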




Error lines from build-log.txt

... skipping 429 lines ...
W0104 18:53:38.615] Trying to find master named 'kubemark-5000-master'
W0104 18:53:38.615] Looking for address 'kubemark-5000-master-ip'
W0104 18:53:39.869] Looking for address 'kubemark-5000-master-internal-ip'
I0104 18:53:41.184] Waiting up to 300 seconds for cluster initialization.
I0104 18:53:41.184] 
I0104 18:53:41.184]   This will continually check to see if the API for kubernetes is reachable.
I0104 18:53:41.184]   This may time out if there was some uncaught error during start up.
I0104 18:53:41.184] 
W0104 18:53:41.285] Using master: kubemark-5000-master (external IP: 35.231.150.150; internal IP: 10.40.0.2)
I0104 18:53:41.385] Kubernetes cluster created.
I0104 18:53:41.658] Cluster "kubemark-scalability-testing_kubemark-5000" set.
I0104 18:53:41.923] User "kubemark-scalability-testing_kubemark-5000" set.
I0104 18:53:42.210] Context "kubemark-scalability-testing_kubemark-5000" created.
... skipping 103 lines ...
I0104 18:54:41.216] kubemark-5000-minion-group-xts4   Ready                      <none>   26s   v1.18.0-alpha.1.324+3a1d076766d7b2
I0104 18:54:41.216] kubemark-5000-minion-group-xv4l   Ready                      <none>   28s   v1.18.0-alpha.1.324+3a1d076766d7b2
I0104 18:54:41.216] kubemark-5000-minion-group-zb1b   Ready                      <none>   27s   v1.18.0-alpha.1.324+3a1d076766d7b2
I0104 18:54:41.217] kubemark-5000-minion-group-zl96   Ready                      <none>   27s   v1.18.0-alpha.1.324+3a1d076766d7b2
I0104 18:54:41.217] kubemark-5000-minion-heapster     Ready                      <none>   43s   v1.18.0-alpha.1.324+3a1d076766d7b2
I0104 18:54:41.591] Validate output:
I0104 18:54:41.944] NAME                 STATUS    MESSAGE             ERROR
I0104 18:54:41.944] scheduler            Healthy   ok                  
I0104 18:54:41.945] etcd-1               Healthy   {"health":"true"}   
I0104 18:54:41.945] controller-manager   Healthy   ok                  
I0104 18:54:41.945] etcd-0               Healthy   {"health":"true"}   
I0104 18:54:41.961] Cluster validation succeeded
W0104 18:54:42.062] Done, listing cluster services:
... skipping 219 lines ...
W0104 18:57:28.086] Trying to find master named 'kubemark-5000-kubemark-master'
W0104 18:57:28.086] Looking for address 'kubemark-5000-kubemark-master-ip'
W0104 18:57:29.107] Looking for address 'kubemark-5000-kubemark-master-internal-ip'
I0104 18:57:30.148] Waiting up to 300 seconds for cluster initialization.
I0104 18:57:30.149] 
I0104 18:57:30.149]   This will continually check to see if the API for kubernetes is reachable.
I0104 18:57:30.149]   This may time out if there was some uncaught error during start up.
I0104 18:57:30.149] 
I0104 18:58:00.302] ............Kubernetes cluster created.
W0104 18:58:00.403] Using master: kubemark-5000-kubemark-master (external IP: 35.243.232.192; internal IP: 10.40.3.216)
I0104 18:58:00.534] Cluster "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I0104 18:58:00.767] User "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I0104 18:58:01.002] Context "kubemark-scalability-testing_kubemark-5000-kubemark" created.
... skipping 19 lines ...
I0104 18:58:26.729] Found 0 Nodes, allowing additional 2 iterations for other Nodes to join.
I0104 18:58:26.729] Waiting for 1 ready nodes. 0 ready nodes, 1 registered. Retrying.
I0104 18:58:42.105] Found 1 node(s).
I0104 18:58:42.461] NAME                            STATUS                     ROLES    AGE   VERSION
I0104 18:58:42.461] kubemark-5000-kubemark-master   Ready,SchedulingDisabled   <none>   21s   v1.18.0-alpha.1.324+3a1d076766d7b2
I0104 18:58:42.881] Validate output:
I0104 18:58:43.218] NAME                 STATUS    MESSAGE             ERROR
I0104 18:58:43.218] controller-manager   Healthy   ok                  
I0104 18:58:43.219] scheduler            Healthy   ok                  
I0104 18:58:43.219] etcd-1               Healthy   {"health":"true"}   
I0104 18:58:43.219] etcd-0               Healthy   {"health":"true"}   
I0104 18:58:43.229] Cluster validation succeeded
W0104 18:58:43.329] Done, listing cluster services:
... skipping 5148 lines ...
W0104 19:04:51.176] I0104 19:04:51.176585   28477 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-serviceMonitor.yaml
W0104 19:04:51.219] I0104 19:04:51.218626   28477 prometheus.go:201] Exposing kube-apiserver metrics in kubemark cluster
W0104 19:04:51.369] I0104 19:04:51.369010   28477 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-endpoints.yaml
W0104 19:04:51.410] I0104 19:04:51.409495   28477 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-service.yaml
W0104 19:04:51.449] I0104 19:04:51.448691   28477 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-serviceMonitor.yaml
W0104 19:04:51.489] I0104 19:04:51.488729   28477 prometheus.go:277] Waiting for Prometheus stack to become healthy...
W0104 19:05:21.531] W0104 19:05:21.530900   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:05:51.527] W0104 19:05:51.527359   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:06:21.528] W0104 19:06:21.528073   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:06:51.527] W0104 19:06:51.527386   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:07:21.527] W0104 19:07:21.527701   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:07:51.528] W0104 19:07:51.528267   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:08:21.528] W0104 19:08:21.527551   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:08:51.530] W0104 19:08:51.529737   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:09:21.528] W0104 19:09:21.527689   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:09:51.528] W0104 19:09:51.527366   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:10:21.527] W0104 19:10:21.527599   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:10:51.534] W0104 19:10:51.532628   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:11:21.531] W0104 19:11:21.530788   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:11:51.527] W0104 19:11:51.527554   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:12:21.527] W0104 19:12:21.527042   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:12:51.528] W0104 19:12:51.528498   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:13:21.529] W0104 19:13:21.528566   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:13:51.527] W0104 19:13:51.527434   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:14:21.528] W0104 19:14:21.528264   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:14:51.528] W0104 19:14:51.527899   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:15:21.528] W0104 19:15:21.528261   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:15:51.535] W0104 19:15:51.534947   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:16:21.527] W0104 19:16:21.527648   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:16:51.527] W0104 19:16:51.527498   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:17:21.527] W0104 19:17:21.527680   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:17:51.527] W0104 19:17:51.527415   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:18:21.528] W0104 19:18:21.527978   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:18:51.527] W0104 19:18:51.527542   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:19:21.527] W0104 19:19:21.527651   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:19:51.532] W0104 19:19:51.531671   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:19:51.569] W0104 19:19:51.568806   28477 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 19:19:51.569] I0104 19:19:51.568840   28477 prometheus.go:325] Dumping monitoring/prometheus-k8s events...
W0104 19:19:51.606] I0104 19:19:51.605743   28477 prometheus.go:336] {
W0104 19:19:51.606]   "metadata": {
W0104 19:19:51.606]     "selfLink": "/api/v1/namespaces/monitoring/events",
W0104 19:19:51.606]     "resourceVersion": "74506"
W0104 19:19:51.606]   },
... skipping 57 lines ...
W0104 19:19:51.622]       "eventTime": null,
W0104 19:19:51.622]       "reportingComponent": "",
W0104 19:19:51.622]       "reportingInstance": ""
W0104 19:19:51.623]     }
W0104 19:19:51.623]   ]
W0104 19:19:51.623] }
W0104 19:19:51.623] F0104 19:19:51.605772   28477 clusterloader.go:248] Error while setting up prometheus stack: timed out waiting for the condition
W0104 19:19:51.648] 2020/01/04 19:19:51 process.go:155: Step '/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1213532076272259074 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml' finished in 15m37.31174815s
W0104 19:19:51.649] 2020/01/04 19:19:51 e2e.go:531: Dumping logs from nodes to GCS directly at path: gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1213532076272259074/artifacts
W0104 19:19:51.649] 2020/01/04 19:19:51 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1213532076272259074/artifacts
W0104 19:19:51.649] 2020/01/04 19:19:51 process.go:153: Running: ./test/kubemark/master-log-dump.sh /workspace/_artifacts
I0104 19:19:51.750] Checking for custom logdump instances, if any
I0104 19:19:51.750] Dumping logs for kubemark master: kubemark-5000-kubemark-master
... skipping 22 lines ...
W0104 19:20:36.936] 
W0104 19:20:36.937] Specify --start=47751 in the next get-serial-port-output invocation to get only the new output starting from here.
W0104 19:20:43.586] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0104 19:20:43.655] scp: /var/log/fluentd.log*: No such file or directory
W0104 19:20:43.660] scp: /var/log/kubelet.cov*: No such file or directory
W0104 19:20:43.661] scp: /var/log/startupscript.log*: No such file or directory
W0104 19:20:43.661] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0104 19:20:43.770] Dumping logs from nodes to GCS directly at 'gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1213532076272259074/artifacts' using logexporter
I0104 19:20:43.770] Detecting nodes in the cluster
I0104 19:20:50.157] namespace/logexporter created
I0104 19:20:50.194] secret/google-service-account created
I0104 19:20:50.234] daemonset.apps/logexporter created
W0104 19:20:51.528] CommandException: One or more URLs matched no objects.
W0104 19:21:08.025] CommandException: One or more URLs matched no objects.
W0104 19:21:12.218] scp: /var/log/glbc.log*: No such file or directory
W0104 19:21:12.219] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0104 19:21:12.288] scp: /var/log/fluentd.log*: No such file or directory
W0104 19:21:12.289] scp: /var/log/kubelet.cov*: No such file or directory
W0104 19:21:12.289] scp: /var/log/startupscript.log*: No such file or directory
W0104 19:21:12.295] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0104 19:21:12.395] Skipping dumping of node logs
W0104 19:21:12.496] 2020/01/04 19:21:12 process.go:155: Step './test/kubemark/master-log-dump.sh /workspace/_artifacts' finished in 1m20.746234535s
W0104 19:21:12.497] 2020/01/04 19:21:12 process.go:153: Running: ./test/kubemark/stop-kubemark.sh
I0104 19:21:24.585] Successfully listed marker files for successful nodes
I0104 19:21:25.182] Fetching logs from logexporter-42d48 running on kubemark-5000-minion-group-gpqq
I0104 19:21:25.187] Fetching logs from logexporter-4k5zc running on kubemark-5000-minion-group-05ds
... skipping 235 lines ...
I0104 19:30:09.099] Deleting firewall rules remaining in network kubemark-5000: kubemark-5000-kubemark-default-internal-master
I0104 19:30:09.100] kubemark-5000-kubemark-default-internal-node
I0104 19:30:09.100] kubemark-5000-kubemark-master-etcd
I0104 19:30:09.100] kubemark-5000-kubemark-master-https
I0104 19:30:09.100] kubemark-5000-kubemark-minion-all
W0104 19:30:15.543] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/zones/us-east1-b/instances/kubemark-5000-kubemark-master].
W0104 19:30:20.669] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0104 19:30:20.670]  - The resource 'projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-https' is not ready
W0104 19:30:20.670] 
W0104 19:30:21.642] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0104 19:30:21.642]  - The resource 'projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-etcd' is not ready
W0104 19:30:21.642] 
W0104 19:30:22.613] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-minion-all].
W0104 19:30:22.761] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-default-internal-master].
W0104 19:30:22.794] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0104 19:30:22.794]  - The resource 'projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-minion-all' was not found
W0104 19:30:22.795] 
W0104 19:30:22.868] Failed to delete firewall rules.
W0104 19:30:23.539] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-default-internal-node].
W0104 19:30:24.359] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-etcd].
W0104 19:30:25.805] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-https].
I0104 19:30:26.918] Deleting custom subnet...
W0104 19:30:28.298] ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
W0104 19:30:28.298]  - The subnetwork resource 'projects/kubemark-scalability-testing/regions/us-east1/subnetworks/kubemark-5000-custom-subnet' is already being used by 'projects/kubemark-scalability-testing/regions/us-east1/addresses/kubemark-5000-kubemark-master-internal-ip'
W0104 19:30:28.298] 
W0104 19:30:28.962] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/regions/us-east1/addresses/kubemark-5000-kubemark-master-ip].
W0104 19:30:32.276] ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
W0104 19:30:32.277]  - The network resource 'projects/kubemark-scalability-testing/global/networks/kubemark-5000' is already being used by 'projects/kubemark-scalability-testing/regions/us-east1/subnetworks/kubemark-5000-custom-subnet'
W0104 19:30:32.277] 
I0104 19:30:32.377] Failed to delete network 'kubemark-5000'. Listing firewall-rules:
W0104 19:30:33.082] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/regions/us-east1/addresses/kubemark-5000-kubemark-master-internal-ip].
W0104 19:30:33.334] 
W0104 19:30:33.334] To show all fields of the firewall, please show in JSON format: --format=json
W0104 19:30:33.334] To show all fields in table format, please see the examples in --help.
W0104 19:30:33.334] 
W0104 19:30:33.626] W0104 19:30:33.626056   36135 loader.go:223] Config not found: /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
... skipping 17 lines ...
I0104 19:30:42.745] Property "users.kubemark-scalability-testing_kubemark-5000-kubemark-basic-auth" unset.
I0104 19:30:42.952] Property "contexts.kubemark-scalability-testing_kubemark-5000-kubemark" unset.
I0104 19:30:42.961] Cleared config for kubemark-scalability-testing_kubemark-5000-kubemark from /workspace/.kube/config
I0104 19:30:42.961] Done
W0104 19:30:42.991] 2020/01/04 19:30:42 process.go:155: Step './test/kubemark/stop-kubemark.sh' finished in 9m30.567883399s
W0104 19:30:42.991] 2020/01/04 19:30:42 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0104 19:30:42.992] 2020/01/04 19:30:42 main.go:319: Something went wrong: encountered 1 errors: [error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1213532076272259074 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1]
W0104 19:30:42.992] Traceback (most recent call last):
W0104 19:30:42.992]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0104 19:30:42.993]     main(parse_args())
W0104 19:30:42.993]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0104 19:30:42.993]     mode.start(runner_args)
W0104 19:30:42.993]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0104 19:30:42.993]     check_env(env, self.command, *args)
W0104 19:30:42.993]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0104 19:30:42.993]     subprocess.check_call(cmd, env=env)
W0104 19:30:42.993]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0104 19:30:42.994]     raise CalledProcessError(retcode, cmd)
W0104 19:30:42.995] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--provider=gce', '--cluster=kubemark-5000', '--gcp-network=kubemark-5000', '--extract=ci/latest', '--gcp-node-image=gci', '--gcp-node-size=n1-standard-8', '--gcp-nodes=84', '--gcp-project=kubemark-scalability-testing', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=5000', '--test_args=--ginkgo.focus=xxxx', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--experimental-gcp-snapshot-prometheus-disk=true', '--test-cmd-args=--experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1213532076272259074', '--test-cmd-args=--nodes=5000', '--test-cmd-args=--provider=kubemark', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_restart_count_check.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=1080m', '--logexporter-gcs-path=gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1213532076272259074/artifacts')' returned non-zero exit status 1
E0104 19:30:42.995] Command failed
I0104 19:30:42.995] process 511 exited with code 1 after 42.7m
E0104 19:30:42.996] FAIL: ci-kubernetes-kubemark-gce-scale
I0104 19:30:42.996] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0104 19:30:43.633] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0104 19:30:43.698] process 36685 exited with code 0 after 0.0m
I0104 19:30:43.698] Call:  gcloud config get-value account
I0104 19:30:44.100] process 36698 exited with code 0 after 0.0m
I0104 19:30:44.101] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
... skipping 21 lines ...
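
For context on the wrapper messages above: the process.go and main.go lines come from kubetest, which shells out to each step (cluster up, ClusterLoaderV2, log dump, teardown), logs how long each took, and rolls any failures up into the single "Something went wrong: encountered 1 errors" summary and junit_runner.xml. A rough Go sketch of that step-runner pattern, with illustrative step names and none of kubetest's actual plumbing:

// Illustrative sketch of a kubetest-style step runner: run each command,
// log its duration, collect failures, and report them together at the end.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"time"
)

func runStep(name string, args ...string) error {
	log.Printf("Running: %s", name)
	start := time.Now()
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	err := cmd.Run()
	log.Printf("Step '%s' finished in %s", name, time.Since(start))
	return err
}

func main() {
	var errs []error
	// Step list is illustrative; the real job passes many more flags.
	steps := [][]string{
		{"/go/src/k8s.io/perf-tests/run-e2e.sh", "cluster-loader2"},
		{"./test/kubemark/master-log-dump.sh", "/workspace/_artifacts"},
		{"./test/kubemark/stop-kubemark.sh"},
	}
	for _, step := range steps {
		if err := runStep(step[0], step[1:]...); err != nil {
			errs = append(errs, fmt.Errorf("error during %s: %v", step[0], err))
		}
	}
	if len(errs) > 0 {
		log.Fatalf("Something went wrong: encountered %d errors: %v", len(errs), errs)
	}
}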