Result: FAILURE
Tests: 1 failed / 17 succeeded
Started: 2019-12-30 17:56
Elapsed: 43m55s
Revision:
Builder: gke-prow-ssd-pool-1a225945-z5n1
links: {u'resultstore': {u'url': u'https://source.cloud.google.com/results/invocations/d655b1a3-0ce6-4b27-a640-d35cf12c9f71/targets/test'}}
pod: 8cf04824-2b2d-11ea-a07b-c6eb1bf16817
resultstore: https://source.cloud.google.com/results/invocations/d655b1a3-0ce6-4b27-a640-d35cf12c9f71/targets/test
infra-commit: 2a734fb01
job-version: v1.18.0-alpha.1.240+a1364be0126743
pod: 8cf04824-2b2d-11ea-a07b-c6eb1bf16817
repo: k8s.io/kubernetes
repo-commit: a1364be0126743c5cc032c21f28fb5e41f636253
repos: {u'k8s.io/kubernetes': u'master', u'k8s.io/perf-tests': u'master'}
revision: v1.18.0-alpha.1.240+a1364be0126743

Test Failures


ClusterLoaderV2 15m32s

error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1211707464022495235 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1
				from junit_runner.xml
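The failing step is the run-e2e.sh ClusterLoaderV2 wrapper shown above. A minimal sketch of re-running that step by hand, assuming a checkout of k8s.io/perf-tests and a kubeconfig already pointing at the kubemark cluster; only the core flags from the failure are repeated, and the --testoverrides flags can be appended in the same way (local paths here are assumptions, not taken from this job):

    # Hypothetical local re-run of the failing step; paths and kubeconfig
    # setup are assumptions, the flags are copied from the failure above.
    cd /go/src/k8s.io/perf-tests
    ./run-e2e.sh cluster-loader2 \
      --nodes=5000 \
      --provider=kubemark \
      --report-dir=/tmp/_artifacts \
      --testconfig=testing/density/config.yaml \
      --testconfig=testing/load/config.yaml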




Error lines from build-log.txt

... skipping 429 lines ...
W1230 18:02:44.939] Trying to find master named 'kubemark-5000-master'
W1230 18:02:44.939] Looking for address 'kubemark-5000-master-ip'
W1230 18:02:45.989] Looking for address 'kubemark-5000-master-internal-ip'
I1230 18:02:46.937] Waiting up to 300 seconds for cluster initialization.
I1230 18:02:46.937] 
I1230 18:02:46.937]   This will continually check to see if the API for kubernetes is reachable.
I1230 18:02:46.938]   This may time out if there was some uncaught error during start up.
I1230 18:02:46.938] 
W1230 18:02:47.038] Using master: kubemark-5000-master (external IP: 35.237.157.213; internal IP: 10.40.0.2)
I1230 18:02:47.139] Kubernetes cluster created.
I1230 18:02:47.267] Cluster "kubemark-scalability-testing_kubemark-5000" set.
I1230 18:02:47.433] User "kubemark-scalability-testing_kubemark-5000" set.
I1230 18:02:47.635] Context "kubemark-scalability-testing_kubemark-5000" created.
... skipping 103 lines ...
I1230 18:04:00.346] kubemark-5000-minion-group-z4pt   Ready                      <none>   40s   v1.18.0-alpha.1.240+a1364be0126743
I1230 18:04:00.346] kubemark-5000-minion-group-zd6p   Ready                      <none>   42s   v1.18.0-alpha.1.240+a1364be0126743
I1230 18:04:00.347] kubemark-5000-minion-group-zj2z   Ready                      <none>   43s   v1.18.0-alpha.1.240+a1364be0126743
I1230 18:04:00.347] kubemark-5000-minion-group-zvv6   Ready                      <none>   43s   v1.18.0-alpha.1.240+a1364be0126743
I1230 18:04:00.347] kubemark-5000-minion-heapster     Ready                      <none>   55s   v1.18.0-alpha.1.240+a1364be0126743
I1230 18:04:00.645] Validate output:
I1230 18:04:00.920] NAME                 STATUS    MESSAGE             ERROR
I1230 18:04:00.921] scheduler            Healthy   ok                  
I1230 18:04:00.921] etcd-0               Healthy   {"health":"true"}   
I1230 18:04:00.921] controller-manager   Healthy   ok                  
I1230 18:04:00.921] etcd-1               Healthy   {"health":"true"}   
I1230 18:04:00.929] Cluster validation succeeded
W1230 18:04:01.029] Done, listing cluster services:
... skipping 219 lines ...
W1230 18:06:52.525] Trying to find master named 'kubemark-5000-kubemark-master'
W1230 18:06:52.525] Looking for address 'kubemark-5000-kubemark-master-ip'
W1230 18:06:53.441] Looking for address 'kubemark-5000-kubemark-master-internal-ip'
I1230 18:06:54.379] Waiting up to 300 seconds for cluster initialization.
I1230 18:06:54.379] 
I1230 18:06:54.380]   This will continually check to see if the API for kubernetes is reachable.
I1230 18:06:54.380]   This may time out if there was some uncaught error during start up.
I1230 18:06:54.380] 
I1230 18:07:14.515] .....Kubernetes cluster created.
W1230 18:07:14.616] Using master: kubemark-5000-kubemark-master (external IP: 35.243.232.192; internal IP: 10.40.3.216)
I1230 18:07:14.717] Cluster "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I1230 18:07:14.870] User "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I1230 18:07:15.055] Context "kubemark-scalability-testing_kubemark-5000-kubemark" created.
... skipping 19 lines ...
I1230 18:07:39.772] Found 0 Nodes, allowing additional 2 iterations for other Nodes to join.
I1230 18:07:39.772] Waiting for 1 ready nodes. 0 ready nodes, 1 registered. Retrying.
I1230 18:07:55.091] Found 1 node(s).
I1230 18:07:55.386] NAME                            STATUS                     ROLES    AGE   VERSION
I1230 18:07:55.386] kubemark-5000-kubemark-master   Ready,SchedulingDisabled   <none>   21s   v1.18.0-alpha.1.240+a1364be0126743
I1230 18:07:55.710] Validate output:
I1230 18:07:55.998] NAME                 STATUS    MESSAGE             ERROR
I1230 18:07:55.999] scheduler            Healthy   ok                  
I1230 18:07:55.999] controller-manager   Healthy   ok                  
I1230 18:07:55.999] etcd-1               Healthy   {"health":"true"}   
I1230 18:07:55.999] etcd-0               Healthy   {"health":"true"}   
I1230 18:07:56.006] Cluster validation succeeded
W1230 18:07:56.107] Done, listing cluster services:
... skipping 5148 lines ...
W1230 18:14:23.809] I1230 18:14:23.809476   28965 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-serviceMonitor.yaml
W1230 18:14:23.847] I1230 18:14:23.847391   28965 prometheus.go:201] Exposing kube-apiserver metrics in kubemark cluster
W1230 18:14:24.003] I1230 18:14:24.003391   28965 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-endpoints.yaml
W1230 18:14:24.042] I1230 18:14:24.042050   28965 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-service.yaml
W1230 18:14:24.080] I1230 18:14:24.079801   28965 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-serviceMonitor.yaml
W1230 18:14:24.118] I1230 18:14:24.118554   28965 prometheus.go:277] Waiting for Prometheus stack to become healthy...
W1230 18:14:54.159] W1230 18:14:54.158633   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:15:24.158] W1230 18:15:24.157989   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:15:54.158] W1230 18:15:54.157799   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:16:24.162] W1230 18:16:24.162631   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:16:54.160] W1230 18:16:54.159726   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:17:24.157] W1230 18:17:24.157582   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:17:54.157] W1230 18:17:54.157644   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:18:24.158] W1230 18:18:24.157845   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:18:54.158] W1230 18:18:54.158042   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:19:24.157] W1230 18:19:24.157647   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:19:54.157] W1230 18:19:54.157530   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:20:24.163] W1230 18:20:24.158775   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:20:54.159] W1230 18:20:54.158651   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:21:24.158] W1230 18:21:24.157939   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:21:54.159] W1230 18:21:54.158870   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:22:24.158] W1230 18:22:24.158135   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:22:54.158] W1230 18:22:54.157884   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:23:24.159] W1230 18:23:24.158731   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:23:54.157] W1230 18:23:54.157398   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:24:24.157] W1230 18:24:24.157550   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:24:54.159] W1230 18:24:54.158778   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:25:24.159] W1230 18:25:24.158606   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:25:54.157] W1230 18:25:54.157312   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:26:24.158] W1230 18:26:24.158094   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:26:54.158] W1230 18:26:54.158178   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:27:24.158] W1230 18:27:24.158409   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:27:54.159] W1230 18:27:54.158877   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:28:24.158] W1230 18:28:24.158460   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:28:54.158] W1230 18:28:54.158045   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:29:24.161] W1230 18:29:24.161330   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:29:24.198] W1230 18:29:24.198155   28965 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 18:29:24.199] I1230 18:29:24.198346   28965 prometheus.go:325] Dumping monitoring/prometheus-k8s events...
W1230 18:29:24.235] I1230 18:29:24.235180   28965 prometheus.go:336] {
W1230 18:29:24.236]   "metadata": {
W1230 18:29:24.236]     "selfLink": "/api/v1/namespaces/monitoring/events",
W1230 18:29:24.236]     "resourceVersion": "74638"
W1230 18:29:24.236]   },
... skipping 57 lines ...
W1230 18:29:24.247]       "eventTime": null,
W1230 18:29:24.248]       "reportingComponent": "",
W1230 18:29:24.248]       "reportingInstance": ""
W1230 18:29:24.248]     }
W1230 18:29:24.248]   ]
W1230 18:29:24.248] }
W1230 18:29:24.248] F1230 18:29:24.235345   28965 clusterloader.go:248] Error while setting up prometheus stack: timed out waiting for the condition
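The fatal error above caps roughly 15 minutes of the health check repeatedly failing with "the server is currently unable to handle the request (get services http:prometheus-k8s:9090)", i.e. the apiserver proxy to the prometheus-k8s service in the monitoring namespace never became reachable before the setup timeout. An illustrative set of commands for inspecting that state (not part of this job; assumes kubectl is pointed at the kubemark cluster and that the objects follow the usual kube-prometheus naming):

    # Hypothetical debugging commands; object names assume the standard
    # kube-prometheus layout (StatefulSet prometheus-k8s, container "prometheus").
    kubectl -n monitoring get pods,svc,endpoints
    kubectl -n monitoring describe statefulset prometheus-k8s
    kubectl -n monitoring logs prometheus-k8s-0 -c prometheus --tail=100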
W1230 18:29:24.275] 2019/12/30 18:29:24 process.go:155: Step '/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1211707464022495235 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml' finished in 15m32.720464674s
W1230 18:29:24.275] 2019/12/30 18:29:24 e2e.go:531: Dumping logs from nodes to GCS directly at path: gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1211707464022495235/artifacts
W1230 18:29:24.275] 2019/12/30 18:29:24 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1211707464022495235/artifacts
W1230 18:29:24.277] 2019/12/30 18:29:24 process.go:153: Running: ./test/kubemark/master-log-dump.sh /workspace/_artifacts
W1230 18:29:24.367] Trying to find master named 'kubemark-5000-master'
W1230 18:29:24.368] Looking for address 'kubemark-5000-master-internal-ip'
... skipping 22 lines ...
W1230 18:30:05.461] 
W1230 18:30:05.461] Specify --start=47796 in the next get-serial-port-output invocation to get only the new output starting from here.
W1230 18:30:11.780] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W1230 18:30:11.848] scp: /var/log/fluentd.log*: No such file or directory
W1230 18:30:11.849] scp: /var/log/kubelet.cov*: No such file or directory
W1230 18:30:11.850] scp: /var/log/startupscript.log*: No such file or directory
W1230 18:30:11.853] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I1230 18:30:11.954] Dumping logs from nodes to GCS directly at 'gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1211707464022495235/artifacts' using logexporter
I1230 18:30:11.954] Detecting nodes in the cluster
I1230 18:30:16.564] namespace/logexporter created
I1230 18:30:16.600] secret/google-service-account created
I1230 18:30:16.638] daemonset.apps/logexporter created
W1230 18:30:17.798] CommandException: One or more URLs matched no objects.
W1230 18:30:34.058] CommandException: One or more URLs matched no objects.
W1230 18:30:40.442] scp: /var/log/glbc.log*: No such file or directory
W1230 18:30:40.442] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W1230 18:30:40.510] scp: /var/log/fluentd.log*: No such file or directory
W1230 18:30:40.510] scp: /var/log/kubelet.cov*: No such file or directory
W1230 18:30:40.510] scp: /var/log/startupscript.log*: No such file or directory
W1230 18:30:40.516] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I1230 18:30:40.621] Skipping dumping of node logs
W1230 18:30:40.722] 2019/12/30 18:30:40 process.go:155: Step './test/kubemark/master-log-dump.sh /workspace/_artifacts' finished in 1m16.344118321s
W1230 18:30:40.722] 2019/12/30 18:30:40 process.go:153: Running: ./test/kubemark/stop-kubemark.sh
I1230 18:30:50.460] Successfully listed marker files for successful nodes
I1230 18:31:06.813] Successfully listed marker files for successful nodes
I1230 18:31:07.246] Fetching logs from logexporter-279pm running on kubemark-5000-minion-group-kstc
... skipping 236 lines ...
W1230 18:38:33.219] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/zones/us-east1-b/instances/kubemark-5000-kubemark-master].
I1230 18:38:33.478] Deleting firewall rules remaining in network kubemark-5000: kubemark-5000-kubemark-default-internal-master
I1230 18:38:33.478] kubemark-5000-kubemark-default-internal-node
I1230 18:38:33.479] kubemark-5000-kubemark-master-etcd
I1230 18:38:33.479] kubemark-5000-kubemark-master-https
I1230 18:38:33.479] kubemark-5000-kubemark-minion-all
W1230 18:38:38.354] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W1230 18:38:38.355]  - The resource 'projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-https' is not ready
W1230 18:38:38.355] 
W1230 18:38:38.778] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W1230 18:38:38.779]  - The resource 'projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-etcd' is not ready
W1230 18:38:38.779] 
W1230 18:38:39.663] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W1230 18:38:39.663]  - The resource 'projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-minion-all' is not ready
W1230 18:38:39.663] 
W1230 18:38:42.420] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-default-internal-master].
W1230 18:38:43.554] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-default-internal-node].
W1230 18:38:44.855] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-https].
W1230 18:38:44.945] Failed to delete firewall rules.
W1230 18:38:46.213] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-minion-all].
W1230 18:38:48.829] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-etcd].
W1230 18:38:48.922] Failed to delete firewall rules.
I1230 18:38:49.927] Deleting custom subnet...
W1230 18:38:50.875] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/regions/us-east1/addresses/kubemark-5000-kubemark-master-ip].
W1230 18:38:51.154] ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
W1230 18:38:51.155]  - The subnetwork resource 'projects/kubemark-scalability-testing/regions/us-east1/subnetworks/kubemark-5000-custom-subnet' is already being used by 'projects/kubemark-scalability-testing/regions/us-east1/addresses/kubemark-5000-kubemark-master-internal-ip'
W1230 18:38:51.155] 
W1230 18:38:54.868] ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
W1230 18:38:54.868]  - The network resource 'projects/kubemark-scalability-testing/global/networks/kubemark-5000' is already being used by 'projects/kubemark-scalability-testing/regions/us-east1/subnetworks/kubemark-5000-custom-subnet'
W1230 18:38:54.868] 
I1230 18:38:54.969] Failed to delete network 'kubemark-5000'. Listing firewall-rules:
W1230 18:38:55.768] 
W1230 18:38:55.768] To show all fields of the firewall, please show in JSON format: --format=json
W1230 18:38:55.769] To show all fields in table format, please see the examples in --help.
W1230 18:38:55.769] 
W1230 18:38:56.037] W1230 18:38:56.037637   36742 loader.go:223] Config not found: /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
W1230 18:38:56.237] W1230 18:38:56.237219   36791 loader.go:223] Config not found: /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
... skipping 17 lines ...
I1230 18:39:05.364] Property "users.kubemark-scalability-testing_kubemark-5000-kubemark-basic-auth" unset.
I1230 18:39:05.542] Property "contexts.kubemark-scalability-testing_kubemark-5000-kubemark" unset.
I1230 18:39:05.548] Cleared config for kubemark-scalability-testing_kubemark-5000-kubemark from /workspace/.kube/config
I1230 18:39:05.548] Done
W1230 18:39:05.581] 2019/12/30 18:39:05 process.go:155: Step './test/kubemark/stop-kubemark.sh' finished in 8m24.930576991s
W1230 18:39:05.582] 2019/12/30 18:39:05 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W1230 18:39:05.583] 2019/12/30 18:39:05 main.go:319: Something went wrong: encountered 1 errors: [error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1211707464022495235 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1]
W1230 18:39:05.583] Traceback (most recent call last):
W1230 18:39:05.583]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W1230 18:39:05.583]     main(parse_args())
W1230 18:39:05.583]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W1230 18:39:05.583]     mode.start(runner_args)
W1230 18:39:05.583]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W1230 18:39:05.583]     check_env(env, self.command, *args)
W1230 18:39:05.584]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W1230 18:39:05.584]     subprocess.check_call(cmd, env=env)
W1230 18:39:05.584]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W1230 18:39:05.584]     raise CalledProcessError(retcode, cmd)
W1230 18:39:05.585] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--provider=gce', '--cluster=kubemark-5000', '--gcp-network=kubemark-5000', '--extract=ci/latest', '--gcp-node-image=gci', '--gcp-node-size=n1-standard-8', '--gcp-nodes=84', '--gcp-project=kubemark-scalability-testing', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=5000', '--test_args=--ginkgo.focus=xxxx', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--experimental-gcp-snapshot-prometheus-disk=true', '--test-cmd-args=--experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1211707464022495235', '--test-cmd-args=--nodes=5000', '--test-cmd-args=--provider=kubemark', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_restart_count_check.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=1080m', '--logexporter-gcs-path=gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1211707464022495235/artifacts')' returned non-zero exit status 1
E1230 18:39:05.586] Command failed
I1230 18:39:05.586] process 508 exited with code 1 after 41.3m
E1230 18:39:05.586] FAIL: ci-kubernetes-kubemark-gce-scale
I1230 18:39:05.586] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W1230 18:39:06.152] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I1230 18:39:06.211] process 37304 exited with code 0 after 0.0m
I1230 18:39:06.211] Call:  gcloud config get-value account
I1230 18:39:06.577] process 37317 exited with code 0 after 0.0m
I1230 18:39:06.577] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
... skipping 21 lines ...