Result: FAILURE
Tests: 1 failed / 17 succeeded
Started: 2020-01-04 06:45
Elapsed: 43m54s
Revision: v1.18.0-alpha.1.320+6be12b82354ae2
Builder: gke-prow-ssd-pool-1a225945-g9xl
pod: a9f834bc-2ebd-11ea-a07b-c6eb1bf16817
resultstore: https://source.cloud.google.com/results/invocations/36cab0dd-7b19-435a-884b-89dc57f7900e/targets/test
infra-commit: f6d82f412
job-version: v1.18.0-alpha.1.320+6be12b82354ae2
repo: k8s.io/kubernetes
repo-commit: 6be12b82354ae2832338463071c7811dd0ac95ba
repos: k8s.io/kubernetes (master), k8s.io/perf-tests (master)

Test Failures


ClusterLoaderV2 (15m35s)

error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1213350622565240832 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1
				from junit_runner.xml
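
The failing step is ClusterLoaderV2's setup phase: the build log below shows clusterloader2 repeatedly getting "the server is currently unable to handle the request" while polling the Prometheus API through the apiserver service proxy (service http:prometheus-k8s:9090 in the monitoring namespace) until it gives up. As a minimal sketch of checking that same proxy endpoint by hand with the official Kubernetes Python client (the /-/healthy path and the client usage here are illustrative assumptions, not taken from this job):

# Minimal sketch: probe the service-proxy endpoint that clusterloader2 polls.
# Assumptions (not from this log): Prometheus's "/-/healthy" path, the
# `kubernetes` Python client, and a kubeconfig pointing at the kubemark cluster.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

def prometheus_proxy_healthy(namespace="monitoring", service="http:prometheus-k8s:9090"):
    config.load_kube_config()   # use the current kubeconfig context
    v1 = client.CoreV1Api()
    try:
        # GET /api/v1/namespaces/{namespace}/services/{service}/proxy/-/healthy
        v1.connect_get_namespaced_service_proxy_with_path(service, namespace, "-/healthy")
        return True
    except ApiException as err:
        # A 503 here corresponds to "the server is currently unable to handle the request"
        print("prometheus proxy check failed: %s" % err.reason)
        return False

A roughly equivalent one-off check from a shell is kubectl get --raw "/api/v1/namespaces/monitoring/services/http:prometheus-k8s:9090/proxy/-/healthy".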




Error lines from build-log.txt

... skipping 426 lines ...
W0104 06:51:30.194] Trying to find master named 'kubemark-5000-master'
W0104 06:51:30.194] Looking for address 'kubemark-5000-master-ip'
W0104 06:51:31.238] Looking for address 'kubemark-5000-master-internal-ip'
I0104 06:51:32.271] Waiting up to 300 seconds for cluster initialization.
I0104 06:51:32.272] 
I0104 06:51:32.272]   This will continually check to see if the API for kubernetes is reachable.
I0104 06:51:32.272]   This may time out if there was some uncaught error during start up.
I0104 06:51:32.272] 
W0104 06:51:32.373] Using master: kubemark-5000-master (external IP: 35.237.141.204; internal IP: 10.40.0.2)
I0104 06:51:39.460] .Kubernetes cluster created.
I0104 06:51:39.666] Cluster "kubemark-scalability-testing_kubemark-5000" set.
I0104 06:51:39.857] User "kubemark-scalability-testing_kubemark-5000" set.
I0104 06:51:40.053] Context "kubemark-scalability-testing_kubemark-5000" created.
... skipping 102 lines ...
I0104 06:52:36.681] kubemark-5000-minion-group-z1l4   Ready                      <none>   17s   v1.18.0-alpha.1.320+6be12b82354ae2
I0104 06:52:36.681] kubemark-5000-minion-group-zbrd   Ready                      <none>   23s   v1.18.0-alpha.1.320+6be12b82354ae2
I0104 06:52:36.681] kubemark-5000-minion-group-zmcw   Ready                      <none>   23s   v1.18.0-alpha.1.320+6be12b82354ae2
I0104 06:52:36.682] kubemark-5000-minion-group-zwww   Ready                      <none>   23s   v1.18.0-alpha.1.320+6be12b82354ae2
I0104 06:52:36.682] kubemark-5000-minion-heapster     Ready                      <none>   37s   v1.18.0-alpha.1.320+6be12b82354ae2
I0104 06:52:37.000] Validate output:
I0104 06:52:37.287] NAME                 STATUS    MESSAGE             ERROR
I0104 06:52:37.288] scheduler            Healthy   ok                  
I0104 06:52:37.288] etcd-1               Healthy   {"health":"true"}   
I0104 06:52:37.288] controller-manager   Healthy   ok                  
I0104 06:52:37.288] etcd-0               Healthy   {"health":"true"}   
I0104 06:52:37.297] Cluster validation succeeded
W0104 06:52:37.397] Done, listing cluster services:
... skipping 219 lines ...
W0104 06:55:15.622] Trying to find master named 'kubemark-5000-kubemark-master'
W0104 06:55:15.622] Looking for address 'kubemark-5000-kubemark-master-ip'
W0104 06:55:16.727] Looking for address 'kubemark-5000-kubemark-master-internal-ip'
I0104 06:55:17.728] Waiting up to 300 seconds for cluster initialization.
I0104 06:55:17.728] 
I0104 06:55:17.728]   This will continually check to see if the API for kubernetes is reachable.
I0104 06:55:17.728]   This may time out if there was some uncaught error during start up.
I0104 06:55:17.729] 
W0104 06:55:17.829] Using master: kubemark-5000-kubemark-master (external IP: 35.243.232.192; internal IP: 10.40.3.216)
I0104 06:55:49.455] ............Kubernetes cluster created.
I0104 06:55:49.665] Cluster "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I0104 06:55:49.876] User "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I0104 06:55:50.079] Context "kubemark-scalability-testing_kubemark-5000-kubemark" created.
... skipping 23 lines ...
I0104 06:56:31.058] NAME                            STATUS                        ROLES    AGE   VERSION
I0104 06:56:31.059] kubemark-5000-kubemark-master   NotReady,SchedulingDisabled   <none>   20s   v1.18.0-alpha.1.320+6be12b82354ae2
I0104 06:56:31.065] Found 1 node(s).
I0104 06:56:31.396] NAME                            STATUS                     ROLES    AGE   VERSION
I0104 06:56:31.397] kubemark-5000-kubemark-master   Ready,SchedulingDisabled   <none>   20s   v1.18.0-alpha.1.320+6be12b82354ae2
I0104 06:56:31.755] Validate output:
I0104 06:56:32.056] NAME                 STATUS    MESSAGE             ERROR
I0104 06:56:32.057] scheduler            Healthy   ok                  
I0104 06:56:32.057] controller-manager   Healthy   ok                  
I0104 06:56:32.057] etcd-1               Healthy   {"health":"true"}   
I0104 06:56:32.057] etcd-0               Healthy   {"health":"true"}   
I0104 06:56:32.068] Cluster validation encountered some problems, but cluster should be in working order
W0104 06:56:32.169] ...ignoring non-fatal errors in validate-cluster
W0104 06:56:32.169] Done, listing cluster services:
W0104 06:56:32.169] 
I0104 06:56:32.395] Kubernetes master is running at https://35.243.232.192
I0104 06:56:32.395] 
I0104 06:56:32.396] To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
I0104 06:56:32.404] 
... skipping 5143 lines ...
W0104 07:02:28.276] I0104 07:02:28.275683   28837 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-serviceMonitor.yaml
W0104 07:02:28.315] I0104 07:02:28.314751   28837 prometheus.go:201] Exposing kube-apiserver metrics in kubemark cluster
W0104 07:02:28.467] I0104 07:02:28.467447   28837 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-endpoints.yaml
W0104 07:02:28.510] I0104 07:02:28.510031   28837 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-service.yaml
W0104 07:02:28.549] I0104 07:02:28.548847   28837 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-serviceMonitor.yaml
W0104 07:02:28.589] I0104 07:02:28.589064   28837 prometheus.go:277] Waiting for Prometheus stack to become healthy...
W0104 07:02:58.629] W0104 07:02:58.629370   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:03:28.630] W0104 07:03:28.630336   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:03:58.632] W0104 07:03:58.631896   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:04:28.630] W0104 07:04:28.630559   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:04:58.632] W0104 07:04:58.631763   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:05:28.631] W0104 07:05:28.630740   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:05:58.630] W0104 07:05:58.630462   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:06:28.630] W0104 07:06:28.630553   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:06:58.630] W0104 07:06:58.629714   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:07:28.630] W0104 07:07:28.629837   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:07:58.631] W0104 07:07:58.631538   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:08:28.633] W0104 07:08:28.632756   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:08:58.631] W0104 07:08:58.631055   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:09:28.629] W0104 07:09:28.629601   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:09:58.629] W0104 07:09:58.629346   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:10:28.629] W0104 07:10:28.629545   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:10:58.630] W0104 07:10:58.629862   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:11:28.630] W0104 07:11:28.630434   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:11:58.630] W0104 07:11:58.629948   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:12:28.630] W0104 07:12:28.629737   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:12:58.633] W0104 07:12:58.633172   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:13:28.630] W0104 07:13:28.629847   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:13:58.630] W0104 07:13:58.629770   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:14:28.629] W0104 07:14:28.629385   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:14:58.630] W0104 07:14:58.630371   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:15:28.632] W0104 07:15:28.632405   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:15:58.629] W0104 07:15:58.629511   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:16:28.629] W0104 07:16:28.629096   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:16:58.629] W0104 07:16:58.629639   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:17:28.631] W0104 07:17:28.630689   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:17:28.669] W0104 07:17:28.668238   28837 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0104 07:17:28.669] I0104 07:17:28.668273   28837 prometheus.go:325] Dumping monitoring/prometheus-k8s events...
W0104 07:17:28.707] I0104 07:17:28.706000   28837 prometheus.go:336] {
W0104 07:17:28.708]   "metadata": {
W0104 07:17:28.708]     "selfLink": "/api/v1/namespaces/monitoring/events",
W0104 07:17:28.708]     "resourceVersion": "74605"
W0104 07:17:28.708]   },
... skipping 57 lines ...
W0104 07:17:28.719]       "eventTime": null,
W0104 07:17:28.719]       "reportingComponent": "",
W0104 07:17:28.719]       "reportingInstance": ""
W0104 07:17:28.719]     }
W0104 07:17:28.719]   ]
W0104 07:17:28.719] }
W0104 07:17:28.720] F0104 07:17:28.706036   28837 clusterloader.go:248] Error while setting up prometheus stack: timed out waiting for the condition
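
The warnings above recur about every 30 seconds from 07:02:58 to 07:17:28, after which clusterloader2 aborts with the generic "timed out waiting for the condition" error. A minimal sketch of that poll-until-deadline pattern (illustrative only; the 30s interval and ~15m budget are read off the log timestamps, not taken from the clusterloader2 source):

# Illustrative poll loop matching the cadence seen in the log: retry every
# 30 seconds, give up after roughly 15 minutes with a timeout error.
import time

def wait_for(condition, interval=30, timeout=15 * 60):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if condition():
                return
        except Exception as err:   # e.g. a 503 from the apiserver service proxy
            print("error while calling prometheus api: %s" % err)
        time.sleep(interval)
    raise TimeoutError("timed out waiting for the condition")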
W0104 07:17:28.747] 2020/01/04 07:17:28 process.go:155: Step '/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1213350622565240832 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml' finished in 15m35.380115272s
W0104 07:17:28.748] 2020/01/04 07:17:28 e2e.go:531: Dumping logs from nodes to GCS directly at path: gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1213350622565240832/artifacts
W0104 07:17:28.748] 2020/01/04 07:17:28 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1213350622565240832/artifacts
W0104 07:17:28.748] 2020/01/04 07:17:28 process.go:153: Running: ./test/kubemark/master-log-dump.sh /workspace/_artifacts
W0104 07:17:28.838] Trying to find master named 'kubemark-5000-master'
W0104 07:17:28.839] Looking for address 'kubemark-5000-master-internal-ip'
... skipping 22 lines ...
W0104 07:18:11.604] 
W0104 07:18:11.605] Specify --start=47737 in the next get-serial-port-output invocation to get only the new output starting from here.
W0104 07:18:17.939] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0104 07:18:18.009] scp: /var/log/fluentd.log*: No such file or directory
W0104 07:18:18.009] scp: /var/log/kubelet.cov*: No such file or directory
W0104 07:18:18.009] scp: /var/log/startupscript.log*: No such file or directory
W0104 07:18:18.017] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0104 07:18:18.133] Dumping logs from nodes to GCS directly at 'gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1213350622565240832/artifacts' using logexporter
I0104 07:18:18.134] Detecting nodes in the cluster
I0104 07:18:22.971] namespace/logexporter created
I0104 07:18:23.008] secret/google-service-account created
I0104 07:18:23.052] daemonset.apps/logexporter created
W0104 07:18:24.172] CommandException: One or more URLs matched no objects.
W0104 07:18:40.412] CommandException: One or more URLs matched no objects.
W0104 07:18:47.021] scp: /var/log/glbc.log*: No such file or directory
W0104 07:18:47.022] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0104 07:18:47.090] scp: /var/log/fluentd.log*: No such file or directory
W0104 07:18:47.091] scp: /var/log/kubelet.cov*: No such file or directory
W0104 07:18:47.092] scp: /var/log/startupscript.log*: No such file or directory
W0104 07:18:47.098] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0104 07:18:47.212] Skipping dumping of node logs
W0104 07:18:47.313] 2020/01/04 07:18:47 process.go:155: Step './test/kubemark/master-log-dump.sh /workspace/_artifacts' finished in 1m18.465048203s
W0104 07:18:47.313] 2020/01/04 07:18:47 process.go:153: Running: ./test/kubemark/stop-kubemark.sh
I0104 07:18:56.905] Successfully listed marker files for successful nodes
I0104 07:19:13.274] Successfully listed marker files for successful nodes
I0104 07:19:13.799] Fetching logs from logexporter-2f46s running on kubemark-5000-minion-group-0bc5
... skipping 236 lines ...
W0104 07:27:40.459] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/zones/us-east1-b/instances/kubemark-5000-kubemark-master].
I0104 07:27:41.041] Deleting firewall rules remaining in network kubemark-5000: kubemark-5000-kubemark-default-internal-master
I0104 07:27:41.041] kubemark-5000-kubemark-default-internal-node
I0104 07:27:41.042] kubemark-5000-kubemark-master-etcd
I0104 07:27:41.042] kubemark-5000-kubemark-master-https
I0104 07:27:41.042] kubemark-5000-kubemark-minion-all
W0104 07:27:46.496] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0104 07:27:46.497]  - The resource 'projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-https' is not ready
W0104 07:27:46.497] 
W0104 07:27:46.751] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0104 07:27:46.751]  - The resource 'projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-etcd' is not ready
W0104 07:27:46.751] 
W0104 07:27:47.033] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-default-internal-master].
W0104 07:27:47.767] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0104 07:27:47.767]  - The resource 'projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-minion-all' is not ready
W0104 07:27:47.768] 
W0104 07:27:50.982] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-default-internal-node].
W0104 07:27:52.169] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-etcd].
W0104 07:27:52.616] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-https].
W0104 07:27:52.724] Failed to delete firewall rules.
W0104 07:27:54.297] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-minion-all].
W0104 07:27:54.436] Failed to delete firewall rules.
I0104 07:27:55.450] Deleting custom subnet...
W0104 07:27:56.643] ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
W0104 07:27:56.644]  - The subnetwork resource 'projects/kubemark-scalability-testing/regions/us-east1/subnetworks/kubemark-5000-custom-subnet' is already being used by 'projects/kubemark-scalability-testing/regions/us-east1/addresses/kubemark-5000-kubemark-master-internal-ip'
W0104 07:27:56.644] 
W0104 07:27:58.673] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/regions/us-east1/addresses/kubemark-5000-kubemark-master-ip].
W0104 07:28:00.307] ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
W0104 07:28:00.308]  - The network resource 'projects/kubemark-scalability-testing/global/networks/kubemark-5000' is already being used by 'projects/kubemark-scalability-testing/regions/us-east1/subnetworks/kubemark-5000-custom-subnet'
W0104 07:28:00.308] 
I0104 07:28:00.409] Failed to delete network 'kubemark-5000'. Listing firewall-rules:
W0104 07:28:01.271] 
W0104 07:28:01.271] To show all fields of the firewall, please show in JSON format: --format=json
W0104 07:28:01.272] To show all fields in table format, please see the examples in --help.
W0104 07:28:01.272] 
W0104 07:28:01.557] W0104 07:28:01.557007   36602 loader.go:223] Config not found: /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
W0104 07:28:01.737] W0104 07:28:01.737097   36651 loader.go:223] Config not found: /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
... skipping 17 lines ...
I0104 07:28:11.848] Property "users.kubemark-scalability-testing_kubemark-5000-kubemark-basic-auth" unset.
I0104 07:28:12.048] Property "contexts.kubemark-scalability-testing_kubemark-5000-kubemark" unset.
I0104 07:28:12.058] Cleared config for kubemark-scalability-testing_kubemark-5000-kubemark from /workspace/.kube/config
I0104 07:28:12.058] Done
W0104 07:28:12.088] 2020/01/04 07:28:12 process.go:155: Step './test/kubemark/stop-kubemark.sh' finished in 9m24.848419857s
W0104 07:28:12.088] 2020/01/04 07:28:12 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0104 07:28:12.089] 2020/01/04 07:28:12 main.go:319: Something went wrong: encountered 1 errors: [error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1213350622565240832 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1]
W0104 07:28:12.089] Traceback (most recent call last):
W0104 07:28:12.090]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0104 07:28:12.090]     main(parse_args())
W0104 07:28:12.090]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0104 07:28:12.090]     mode.start(runner_args)
W0104 07:28:12.091]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0104 07:28:12.091]     check_env(env, self.command, *args)
W0104 07:28:12.091]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0104 07:28:12.091]     subprocess.check_call(cmd, env=env)
W0104 07:28:12.091]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0104 07:28:12.091]     raise CalledProcessError(retcode, cmd)
W0104 07:28:12.093] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--provider=gce', '--cluster=kubemark-5000', '--gcp-network=kubemark-5000', '--extract=ci/latest', '--gcp-node-image=gci', '--gcp-node-size=n1-standard-8', '--gcp-nodes=84', '--gcp-project=kubemark-scalability-testing', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=5000', '--test_args=--ginkgo.focus=xxxx', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--experimental-gcp-snapshot-prometheus-disk=true', '--test-cmd-args=--experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1213350622565240832', '--test-cmd-args=--nodes=5000', '--test-cmd-args=--provider=kubemark', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_restart_count_check.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=1080m', '--logexporter-gcs-path=gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1213350622565240832/artifacts')' returned non-zero exit status 1
E0104 07:28:12.093] Command failed
I0104 07:28:12.093] process 509 exited with code 1 after 41.0m
E0104 07:28:12.093] FAIL: ci-kubernetes-kubemark-gce-scale
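
The traceback above shows how the failure propagates: kubernetes_e2e.py runs kubetest via subprocess.check_call, so kubetest's exit status 1 (caused by the ClusterLoaderV2 step) surfaces as a CalledProcessError, the scenario exits non-zero, and the job is marked FAIL. A minimal sketch of that wrapper, based only on the call chain visible in the traceback (the environment merging is an assumption about the real check_env):

# Sketch of the wrapper seen in the traceback: any non-zero exit from the
# child process (here, kubetest) raises CalledProcessError and fails the job.
import os
import subprocess

def check_env(env, *cmd):
    """Run cmd with extra environment variables, raising on a non-zero exit."""
    merged = dict(os.environ)
    merged.update(env)
    subprocess.check_call(cmd, env=merged)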
I0104 07:28:12.094] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0104 07:28:12.690] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0104 07:28:12.760] process 37162 exited with code 0 after 0.0m
I0104 07:28:12.760] Call:  gcloud config get-value account
I0104 07:28:13.147] process 37175 exited with code 0 after 0.0m
I0104 07:28:13.148] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
... skipping 21 lines ...