Result: FAILURE
Tests: 1 failed / 17 succeeded
Started: 2019-12-29 17:54
Elapsed: 45m2s
Revision: v1.18.0-alpha.1.224+886cf062a4ccfe
Builder: gke-prow-ssd-pool-1a225945-9tvq
pod: 1991d50b-2a64-11ea-a07b-c6eb1bf16817
resultstore: https://source.cloud.google.com/results/invocations/113460a9-0db9-4228-8eff-b5cf55cac8da/targets/test
infra-commit: 09d9247dc
job-version: v1.18.0-alpha.1.224+886cf062a4ccfe
repo: k8s.io/kubernetes
repo-commit: 886cf062a4ccfed9383ece118470eb7d4b075157
repos: k8s.io/kubernetes: master, k8s.io/perf-tests: master
revision: v1.18.0-alpha.1.224+886cf062a4ccfe

Test Failures


ClusterLoaderV2 15m38s

error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1211344557824806913 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1
				from junit_runner.xml
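
The root cause, visible further down in the build log, is that the ClusterLoaderV2 Prometheus stack never became healthy: every 30 seconds the health probe against the prometheus-k8s service failed with "the server is currently unable to handle the request", and after roughly 15 minutes clusterloader.go:248 aborted with "timed out waiting for the condition". That message is the standard timeout error of the Kubernetes wait helpers. Below is a minimal sketch of that polling pattern, assuming k8s.io/apimachinery/pkg/util/wait and a hypothetical readiness check; it is not the actual clusterloader2 code.

    // Polling-loop sketch: wait.Poll returns an error whose message is exactly
    // "timed out waiting for the condition" when the check never succeeds in time.
    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // Hypothetical readiness check; in this run every attempt failed because the
        // prometheus-k8s service could not be reached through the apiserver proxy.
        isPrometheusHealthy := func() (bool, error) {
            return false, nil // false, nil => keep polling; a non-nil error would abort immediately
        }

        // Roughly matches the observed behaviour: one probe every 30s, ~15 minute budget.
        if err := wait.Poll(30*time.Second, 15*time.Minute, isPrometheusHealthy); err != nil {
            fmt.Println(err) // prints: timed out waiting for the condition
        }
    }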



17 passed tests (not shown)

Error lines from build-log.txt

... skipping 428 lines ...
W1229 18:01:13.969] Trying to find master named 'kubemark-5000-master'
W1229 18:01:13.969] Looking for address 'kubemark-5000-master-ip'
W1229 18:01:15.049] Looking for address 'kubemark-5000-master-internal-ip'
I1229 18:01:16.083] Waiting up to 300 seconds for cluster initialization.
I1229 18:01:16.083] 
I1229 18:01:16.083]   This will continually check to see if the API for kubernetes is reachable.
I1229 18:01:16.084]   This may time out if there was some uncaught error during start up.
I1229 18:01:16.084] 
W1229 18:01:16.185] Using master: kubemark-5000-master (external IP: 35.237.157.213; internal IP: 10.40.0.2)
I1229 18:01:16.286] Kubernetes cluster created.
I1229 18:01:16.597] Cluster "kubemark-scalability-testing_kubemark-5000" set.
I1229 18:01:16.902] User "kubemark-scalability-testing_kubemark-5000" set.
I1229 18:01:17.207] Context "kubemark-scalability-testing_kubemark-5000" created.
... skipping 102 lines ...
I1229 18:02:15.310] kubemark-5000-minion-group-xgmh   Ready                      <none>   14s   v1.18.0-alpha.1.224+886cf062a4ccfe
I1229 18:02:15.311] kubemark-5000-minion-group-xs3d   Ready                      <none>   13s   v1.18.0-alpha.1.224+886cf062a4ccfe
I1229 18:02:15.311] kubemark-5000-minion-group-xz98   Ready                      <none>   19s   v1.18.0-alpha.1.224+886cf062a4ccfe
I1229 18:02:15.311] kubemark-5000-minion-group-z06x   Ready                      <none>   16s   v1.18.0-alpha.1.224+886cf062a4ccfe
I1229 18:02:15.311] kubemark-5000-minion-heapster     Ready                      <none>   32s   v1.18.0-alpha.1.224+886cf062a4ccfe
I1229 18:02:15.770] Validate output:
I1229 18:02:16.212] NAME                 STATUS    MESSAGE             ERROR
I1229 18:02:16.213] controller-manager   Healthy   ok                  
I1229 18:02:16.213] scheduler            Healthy   ok                  
I1229 18:02:16.214] etcd-1               Healthy   {"health":"true"}   
I1229 18:02:16.215] etcd-0               Healthy   {"health":"true"}   
I1229 18:02:16.232] Cluster validation succeeded
W1229 18:02:16.332] Done, listing cluster services:
... skipping 219 lines ...
W1229 18:05:14.723] Trying to find master named 'kubemark-5000-kubemark-master'
W1229 18:05:14.723] Looking for address 'kubemark-5000-kubemark-master-ip'
W1229 18:05:15.730] Looking for address 'kubemark-5000-kubemark-master-internal-ip'
I1229 18:05:16.760] Waiting up to 300 seconds for cluster initialization.
I1229 18:05:16.760] 
I1229 18:05:16.761]   This will continually check to see if the API for kubernetes is reachable.
I1229 18:05:16.761]   This may time out if there was some uncaught error during start up.
I1229 18:05:16.761] 
I1229 18:05:42.731] ........Kubernetes cluster created.
W1229 18:05:42.833] Using master: kubemark-5000-kubemark-master (external IP: 35.243.232.192; internal IP: 10.40.3.216)
I1229 18:05:43.049] Cluster "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I1229 18:05:43.360] User "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I1229 18:05:43.695] Context "kubemark-scalability-testing_kubemark-5000-kubemark" created.
... skipping 19 lines ...
I1229 18:06:13.536] Found 0 Nodes, allowing additional 2 iterations for other Nodes to join.
I1229 18:06:13.536] Waiting for 1 ready nodes. 0 ready nodes, 1 registered. Retrying.
I1229 18:06:29.022] Found 1 node(s).
I1229 18:06:29.472] NAME                            STATUS                     ROLES    AGE   VERSION
I1229 18:06:29.473] kubemark-5000-kubemark-master   Ready,SchedulingDisabled   <none>   28s   v1.18.0-alpha.1.224+886cf062a4ccfe
I1229 18:06:29.945] Validate output:
I1229 18:06:30.390] NAME                 STATUS    MESSAGE             ERROR
I1229 18:06:30.390] controller-manager   Healthy   ok                  
I1229 18:06:30.391] scheduler            Healthy   ok                  
I1229 18:06:30.391] etcd-0               Healthy   {"health":"true"}   
I1229 18:06:30.392] etcd-1               Healthy   {"health":"true"}   
I1229 18:06:30.408] Cluster validation succeeded
W1229 18:06:30.509] Done, listing cluster services:
... skipping 5147 lines ...
W1229 18:12:45.659] I1229 18:12:45.658734   28156 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-serviceMonitor.yaml
W1229 18:12:45.697] I1229 18:12:45.696775   28156 prometheus.go:201] Exposing kube-apiserver metrics in kubemark cluster
W1229 18:12:45.854] I1229 18:12:45.854334   28156 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-endpoints.yaml
W1229 18:12:45.894] I1229 18:12:45.894055   28156 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-service.yaml
W1229 18:12:45.932] I1229 18:12:45.932286   28156 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-serviceMonitor.yaml
W1229 18:12:45.972] I1229 18:12:45.972087   28156 prometheus.go:277] Waiting for Prometheus stack to become healthy...
W1229 18:13:16.010] W1229 18:13:16.010443   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:13:46.011] W1229 18:13:46.010972   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:14:16.011] W1229 18:14:16.011579   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:14:46.013] W1229 18:14:46.012708   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:15:16.012] W1229 18:15:16.011551   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:15:46.011] W1229 18:15:46.011169   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:16:16.011] W1229 18:16:16.011504   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:16:46.017] W1229 18:16:46.016837   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:17:16.014] W1229 18:17:16.013843   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:17:46.012] W1229 18:17:46.012553   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:18:16.012] W1229 18:18:16.011671   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:18:46.017] W1229 18:18:46.017536   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:19:16.013] W1229 18:19:16.012711   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:19:46.016] W1229 18:19:46.016017   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:20:16.018] W1229 18:20:16.018512   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:20:46.013] W1229 18:20:46.013209   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:21:16.012] W1229 18:21:16.011712   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:21:46.010] W1229 18:21:46.010482   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:22:16.011] W1229 18:22:16.010836   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:22:46.012] W1229 18:22:46.011709   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:23:16.012] W1229 18:23:16.012423   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:23:46.012] W1229 18:23:46.011969   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:24:16.012] W1229 18:24:16.012447   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:24:46.011] W1229 18:24:46.011410   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:25:16.012] W1229 18:25:16.012173   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:25:46.011] W1229 18:25:46.011605   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:26:16.015] W1229 18:26:16.015173   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:26:46.011] W1229 18:26:46.011556   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:27:16.012] W1229 18:27:16.011907   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:27:46.015] W1229 18:27:46.014795   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:27:46.053] W1229 18:27:46.053450   28156 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1229 18:27:46.054] I1229 18:27:46.053509   28156 prometheus.go:325] Dumping monitoring/prometheus-k8s events...
W1229 18:27:46.092] I1229 18:27:46.090552   28156 prometheus.go:336] {
W1229 18:27:46.092]   "metadata": {
W1229 18:27:46.092]     "selfLink": "/api/v1/namespaces/monitoring/events",
W1229 18:27:46.092]     "resourceVersion": "74726"
W1229 18:27:46.092]   },
... skipping 57 lines ...
W1229 18:27:46.115]       "eventTime": null,
W1229 18:27:46.116]       "reportingComponent": "",
W1229 18:27:46.116]       "reportingInstance": ""
W1229 18:27:46.116]     }
W1229 18:27:46.116]   ]
W1229 18:27:46.116] }
W1229 18:27:46.116] F1229 18:27:46.090582   28156 clusterloader.go:248] Error while setting up prometheus stack: timed out waiting for the condition
W1229 18:27:46.154] 2019/12/29 18:27:46 process.go:155: Step '/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1211344557824806913 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml' finished in 15m38.063352686s
W1229 18:27:46.155] 2019/12/29 18:27:46 e2e.go:531: Dumping logs from nodes to GCS directly at path: gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1211344557824806913/artifacts
W1229 18:27:46.156] 2019/12/29 18:27:46 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1211344557824806913/artifacts
W1229 18:27:46.156] 2019/12/29 18:27:46 process.go:153: Running: ./test/kubemark/master-log-dump.sh /workspace/_artifacts
I1229 18:27:46.257] Checking for custom logdump instances, if any
I1229 18:27:46.258] Dumping logs for kubemark master: kubemark-5000-kubemark-master
... skipping 22 lines ...
W1229 18:28:31.787] 
W1229 18:28:31.787] Specify --start=47760 in the next get-serial-port-output invocation to get only the new output starting from here.
W1229 18:28:38.651] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W1229 18:28:38.731] scp: /var/log/fluentd.log*: No such file or directory
W1229 18:28:38.731] scp: /var/log/kubelet.cov*: No such file or directory
W1229 18:28:38.734] scp: /var/log/startupscript.log*: No such file or directory
W1229 18:28:38.741] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I1229 18:28:38.854] Dumping logs from nodes to GCS directly at 'gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1211344557824806913/artifacts' using logexporter
I1229 18:28:38.855] Detecting nodes in the cluster
I1229 18:28:44.543] namespace/logexporter created
I1229 18:28:44.592] secret/google-service-account created
I1229 18:28:44.634] daemonset.apps/logexporter created
W1229 18:28:46.124] CommandException: One or more URLs matched no objects.
W1229 18:29:02.694] CommandException: One or more URLs matched no objects.
W1229 18:29:09.556] scp: /var/log/glbc.log*: No such file or directory
W1229 18:29:09.557] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W1229 18:29:09.625] scp: /var/log/fluentd.log*: No such file or directory
W1229 18:29:09.626] scp: /var/log/kubelet.cov*: No such file or directory
W1229 18:29:09.628] scp: /var/log/startupscript.log*: No such file or directory
W1229 18:29:09.641] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I1229 18:29:09.752] Skipping dumping of node logs
W1229 18:29:09.853] 2019/12/29 18:29:09 process.go:155: Step './test/kubemark/master-log-dump.sh /workspace/_artifacts' finished in 1m23.598374479s
W1229 18:29:09.854] 2019/12/29 18:29:09 process.go:153: Running: ./test/kubemark/stop-kubemark.sh
I1229 18:29:19.753] Successfully listed marker files for successful nodes
I1229 18:29:20.386] Fetching logs from logexporter-2s4d2 running on kubemark-5000-minion-group-t0kn
I1229 18:29:20.392] Fetching logs from logexporter-44wg2 running on kubemark-5000-minion-group-blwp
... skipping 235 lines ...
W1229 18:36:54.398] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/regions/us-east1/routers/kubemark-5000-nat-router].
I1229 18:36:55.587] Deleting firewall rules remaining in network kubemark-5000: kubemark-5000-kubemark-default-internal-master
I1229 18:36:55.588] kubemark-5000-kubemark-default-internal-node
I1229 18:36:55.588] kubemark-5000-kubemark-master-etcd
I1229 18:36:55.588] kubemark-5000-kubemark-master-https
I1229 18:36:55.588] kubemark-5000-kubemark-minion-all
W1229 18:37:00.807] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W1229 18:37:00.808]  - The resource 'projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-etcd' is not ready
W1229 18:37:00.808] 
W1229 18:37:01.349] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W1229 18:37:01.349]  - The resource 'projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-https' was not found
W1229 18:37:01.350] 
W1229 18:37:02.611] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W1229 18:37:02.611]  - The resource 'projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-minion-all' is not ready
W1229 18:37:02.612] 
W1229 18:37:03.498] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-https].
W1229 18:37:04.934] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-etcd].
W1229 18:37:05.866] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-minion-all].
W1229 18:37:06.155] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-default-internal-node].
W1229 18:37:09.197] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-default-internal-master].
W1229 18:37:09.326] Failed to delete firewall rules.
I1229 18:37:10.369] Deleting custom subnet...
W1229 18:37:11.573] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/regions/us-east1/addresses/kubemark-5000-kubemark-master-ip].
W1229 18:37:11.688] ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
W1229 18:37:11.689]  - The subnetwork resource 'projects/kubemark-scalability-testing/regions/us-east1/subnetworks/kubemark-5000-custom-subnet' is already being used by 'projects/kubemark-scalability-testing/regions/us-east1/addresses/kubemark-5000-kubemark-master-internal-ip'
W1229 18:37:11.689] 
W1229 18:37:15.754] ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
W1229 18:37:15.755]  - The network resource 'projects/kubemark-scalability-testing/global/networks/kubemark-5000' is already being used by 'projects/kubemark-scalability-testing/regions/us-east1/subnetworks/kubemark-5000-custom-subnet'
W1229 18:37:15.755] 
I1229 18:37:15.857] Failed to delete network 'kubemark-5000'. Listing firewall-rules:
W1229 18:37:16.176] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/regions/us-east1/addresses/kubemark-5000-kubemark-master-internal-ip].
W1229 18:37:16.823] 
W1229 18:37:16.823] To show all fields of the firewall, please show in JSON format: --format=json
W1229 18:37:16.823] To show all fields in table format, please see the examples in --help.
W1229 18:37:16.823] 
W1229 18:37:17.173] W1229 18:37:17.173346   35787 loader.go:223] Config not found: /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
... skipping 17 lines ...
I1229 18:37:26.421] Property "users.kubemark-scalability-testing_kubemark-5000-kubemark-basic-auth" unset.
I1229 18:37:26.709] Property "contexts.kubemark-scalability-testing_kubemark-5000-kubemark" unset.
I1229 18:37:26.726] Cleared config for kubemark-scalability-testing_kubemark-5000-kubemark from /workspace/.kube/config
I1229 18:37:26.727] Done
W1229 18:37:26.777] 2019/12/29 18:37:26 process.go:155: Step './test/kubemark/stop-kubemark.sh' finished in 8m16.976417137s
W1229 18:37:26.778] 2019/12/29 18:37:26 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W1229 18:37:26.779] 2019/12/29 18:37:26 main.go:319: Something went wrong: encountered 1 errors: [error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1211344557824806913 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1]
W1229 18:37:26.779] Traceback (most recent call last):
W1229 18:37:26.780]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W1229 18:37:26.780]     main(parse_args())
W1229 18:37:26.780]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W1229 18:37:26.780]     mode.start(runner_args)
W1229 18:37:26.780]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W1229 18:37:26.781]     check_env(env, self.command, *args)
W1229 18:37:26.781]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W1229 18:37:26.782]     subprocess.check_call(cmd, env=env)
W1229 18:37:26.782]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W1229 18:37:26.782]     raise CalledProcessError(retcode, cmd)
W1229 18:37:26.784] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--provider=gce', '--cluster=kubemark-5000', '--gcp-network=kubemark-5000', '--extract=ci/latest', '--gcp-node-image=gci', '--gcp-node-size=n1-standard-8', '--gcp-nodes=84', '--gcp-project=kubemark-scalability-testing', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=5000', '--test_args=--ginkgo.focus=xxxx', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--experimental-gcp-snapshot-prometheus-disk=true', '--test-cmd-args=--experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1211344557824806913', '--test-cmd-args=--nodes=5000', '--test-cmd-args=--provider=kubemark', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_restart_count_check.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=1080m', '--logexporter-gcs-path=gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1211344557824806913/artifacts')' returned non-zero exit status 1
E1229 18:37:26.784] Command failed
I1229 18:37:26.785] process 505 exited with code 1 after 41.6m
E1229 18:37:26.785] FAIL: ci-kubernetes-kubemark-gce-scale
I1229 18:37:26.786] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W1229 18:37:27.434] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I1229 18:37:27.493] process 36332 exited with code 0 after 0.0m
I1229 18:37:27.494] Call:  gcloud config get-value account
I1229 18:37:27.878] process 36345 exited with code 0 after 0.0m
I1229 18:37:27.879] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
... skipping 21 lines ...