Result: FAILURE
Tests: 1 failed / 17 succeeded
Started: 2019-12-30 05:55
Elapsed: 43m21s
Revision:
Builder: gke-prow-ssd-pool-1a225945-5wt3
links: {u'resultstore': {u'url': u'https://source.cloud.google.com/results/invocations/04e2cce2-79bd-46fb-a83a-868978a0f32e/targets/test'}}
pod: d350c98f-2ac8-11ea-a07b-c6eb1bf16817
resultstore: https://source.cloud.google.com/results/invocations/04e2cce2-79bd-46fb-a83a-868978a0f32e/targets/test
infra-commit: 09d9247dc
job-version: v1.18.0-alpha.1.228+6d5302119698c9
pod: d350c98f-2ac8-11ea-a07b-c6eb1bf16817
repo: k8s.io/kubernetes
repo-commit: 6d5302119698c9ccbcb3662e9665a1aa5af29762
repos: {u'k8s.io/kubernetes': u'master', u'k8s.io/perf-tests': u'master'}
revision: v1.18.0-alpha.1.228+6d5302119698c9

Test Failures


ClusterLoaderV2 15m29s

error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1211526009040408576 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1
				from junit_runner.xml




Error lines from build-log.txt

... skipping 429 lines ...
W1230 06:01:30.450] Trying to find master named 'kubemark-5000-master'
W1230 06:01:30.450] Looking for address 'kubemark-5000-master-ip'
W1230 06:01:31.246] Looking for address 'kubemark-5000-master-internal-ip'
I1230 06:01:32.016] Waiting up to 300 seconds for cluster initialization.
I1230 06:01:32.016] 
I1230 06:01:32.016]   This will continually check to see if the API for kubernetes is reachable.
I1230 06:01:32.016]   This may time out if there was some uncaught error during start up.
I1230 06:01:32.016] 
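(The "continually check" message above is the cluster bring-up script polling the apiserver until it answers health checks. For reference, a minimal client-go sketch of that kind of readiness loop; the kubeconfig path, 300s timeout, and 5s poll interval are illustrative assumptions, not values taken from this job.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; substitute whatever kube-up wrote for your run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll /healthz until the apiserver answers "ok", mirroring the
	// "waiting up to 300 seconds for cluster initialization" loop above.
	deadline := time.Now().Add(300 * time.Second)
	for time.Now().Before(deadline) {
		body, err := client.Discovery().RESTClient().
			Get().AbsPath("/healthz").DoRaw(context.TODO())
		if err == nil && string(body) == "ok" {
			fmt.Println("API server is reachable")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the API server")
}
```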
W1230 06:01:32.117] Using master: kubemark-5000-master (external IP: 35.243.232.192; internal IP: 10.40.0.2)
I1230 06:01:32.218] Kubernetes cluster created.
I1230 06:01:32.331] Cluster "kubemark-scalability-testing_kubemark-5000" set.
I1230 06:01:32.573] User "kubemark-scalability-testing_kubemark-5000" set.
I1230 06:01:32.847] Context "kubemark-scalability-testing_kubemark-5000" created.
... skipping 102 lines ...
I1230 06:02:28.539] kubemark-5000-minion-group-xcp9   Ready                      <none>   21s   v1.18.0-alpha.1.228+6d5302119698c9
I1230 06:02:28.540] kubemark-5000-minion-group-xtms   Ready                      <none>   23s   v1.18.0-alpha.1.228+6d5302119698c9
I1230 06:02:28.540] kubemark-5000-minion-group-zklc   Ready                      <none>   24s   v1.18.0-alpha.1.228+6d5302119698c9
I1230 06:02:28.540] kubemark-5000-minion-group-zsv6   Ready                      <none>   19s   v1.18.0-alpha.1.228+6d5302119698c9
I1230 06:02:28.540] kubemark-5000-minion-heapster     Ready                      <none>   39s   v1.18.0-alpha.1.228+6d5302119698c9
I1230 06:02:28.894] Validate output:
I1230 06:02:29.219] NAME                 STATUS    MESSAGE             ERROR
I1230 06:02:29.219] scheduler            Healthy   ok                  
I1230 06:02:29.220] controller-manager   Healthy   ok                  
I1230 06:02:29.220] etcd-1               Healthy   {"health":"true"}   
I1230 06:02:29.220] etcd-0               Healthy   {"health":"true"}   
I1230 06:02:29.238] Cluster validation succeeded
W1230 06:02:29.338] Done, listing cluster services:
... skipping 220 lines ...
W1230 06:05:02.490] Looking for address 'kubemark-5000-kubemark-master-ip'
W1230 06:05:03.264] Looking for address 'kubemark-5000-kubemark-master-internal-ip'
W1230 06:05:04.072] Using master: kubemark-5000-kubemark-master (external IP: 35.237.141.204; internal IP: 10.40.3.216)
I1230 06:05:04.173] Waiting up to 300 seconds for cluster initialization.
I1230 06:05:04.173] 
I1230 06:05:04.174]   This will continually check to see if the API for kubernetes is reachable.
I1230 06:05:04.174]   This may time out if there was some uncaught error during start up.
I1230 06:05:04.174] 
I1230 06:05:36.933] ............Kubernetes cluster created.
I1230 06:05:37.087] Cluster "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I1230 06:05:37.241] User "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I1230 06:05:37.471] Context "kubemark-scalability-testing_kubemark-5000-kubemark" created.
I1230 06:05:37.641] Switched to context "kubemark-scalability-testing_kubemark-5000-kubemark".
... skipping 22 lines ...
I1230 06:06:17.067] NAME                            STATUS                        ROLES    AGE   VERSION
I1230 06:06:17.068] kubemark-5000-kubemark-master   NotReady,SchedulingDisabled   <none>   20s   v1.18.0-alpha.1.228+6d5302119698c9
I1230 06:06:17.072] Found 1 node(s).
I1230 06:06:17.314] NAME                            STATUS                        ROLES    AGE   VERSION
I1230 06:06:17.314] kubemark-5000-kubemark-master   NotReady,SchedulingDisabled   <none>   20s   v1.18.0-alpha.1.228+6d5302119698c9
I1230 06:06:17.581] Validate output:
I1230 06:06:17.820] NAME                 STATUS    MESSAGE             ERROR
I1230 06:06:17.821] controller-manager   Healthy   ok                  
I1230 06:06:17.821] scheduler            Healthy   ok                  
I1230 06:06:17.821] etcd-1               Healthy   {"health":"true"}   
I1230 06:06:17.821] etcd-0               Healthy   {"health":"true"}   
I1230 06:06:17.826] Cluster validation encountered some problems, but cluster should be in working order
W1230 06:06:17.926] ...ignoring non-fatal errors in validate-cluster
W1230 06:06:17.926] Done, listing cluster services:
W1230 06:06:17.927] 
I1230 06:06:18.064] Kubernetes master is running at https://35.237.141.204
I1230 06:06:18.064] 
I1230 06:06:18.064] To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
I1230 06:06:18.069] 
... skipping 5142 lines ...
W1230 06:12:07.038] I1230 06:12:07.038506   29284 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-serviceMonitor.yaml
W1230 06:12:07.076] I1230 06:12:07.076025   29284 prometheus.go:201] Exposing kube-apiserver metrics in kubemark cluster
W1230 06:12:07.223] I1230 06:12:07.223040   29284 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-endpoints.yaml
W1230 06:12:07.263] I1230 06:12:07.262907   29284 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-service.yaml
W1230 06:12:07.300] I1230 06:12:07.300660   29284 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-serviceMonitor.yaml
W1230 06:12:07.339] I1230 06:12:07.339526   29284 prometheus.go:277] Waiting for Prometheus stack to become healthy...
W1230 06:12:37.377] W1230 06:12:37.377135   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:13:07.377] W1230 06:13:07.376859   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:13:37.377] W1230 06:13:37.377107   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:14:07.378] W1230 06:14:07.377953   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:14:37.377] W1230 06:14:37.377126   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:15:07.377] W1230 06:15:07.377344   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:15:37.379] W1230 06:15:37.379425   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:16:07.378] W1230 06:16:07.377238   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:16:37.377] W1230 06:16:37.377273   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:17:07.377] W1230 06:17:07.377652   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:17:37.377] W1230 06:17:37.377363   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:18:07.377] W1230 06:18:07.376897   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:18:37.377] W1230 06:18:37.377062   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:19:07.378] W1230 06:19:07.377932   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:19:37.377] W1230 06:19:37.377541   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:20:07.378] W1230 06:20:07.377805   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:20:37.379] W1230 06:20:37.378864   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:21:07.377] W1230 06:21:07.377177   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:21:37.377] W1230 06:21:37.377357   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:22:07.377] W1230 06:22:07.376899   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:22:37.377] W1230 06:22:37.377750   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:23:07.378] W1230 06:23:07.378392   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:23:37.378] W1230 06:23:37.377829   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:24:07.378] W1230 06:24:07.378132   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:24:37.378] W1230 06:24:37.378209   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:25:07.378] W1230 06:25:07.378620   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:25:37.378] W1230 06:25:37.378074   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:26:07.378] W1230 06:26:07.378368   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:26:37.377] W1230 06:26:37.377728   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:27:07.378] W1230 06:27:07.378050   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:27:07.415] W1230 06:27:07.414664   29284 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W1230 06:27:07.415] I1230 06:27:07.414680   29284 prometheus.go:325] Dumping monitoring/prometheus-k8s events...
W1230 06:27:07.451] I1230 06:27:07.451166   29284 prometheus.go:336] {
W1230 06:27:07.451]   "metadata": {
W1230 06:27:07.451]     "selfLink": "/api/v1/namespaces/monitoring/events",
W1230 06:27:07.451]     "resourceVersion": "74598"
W1230 06:27:07.452]   },
... skipping 57 lines ...
W1230 06:27:07.463]       "eventTime": null,
W1230 06:27:07.463]       "reportingComponent": "",
W1230 06:27:07.463]       "reportingInstance": ""
W1230 06:27:07.463]     }
W1230 06:27:07.463]   ]
W1230 06:27:07.463] }
W1230 06:27:07.463] F1230 06:27:07.451189   29284 clusterloader.go:248] Error while setting up prometheus stack: timed out waiting for the condition
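(The repeated 503s above come from the apiserver's service proxy: the Prometheus health check reaches the stack through the monitoring/prometheus-k8s service, and the proxy returns "the server is currently unable to handle the request" while that service has no ready endpoints. Below is a hedged client-go sketch of how one might probe the same path when triaging this failure; the kubeconfig path and the /-/ready query are illustrative, not necessarily the exact request clusterloader2 issues.)

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig for the cluster running the monitoring stack.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A 503 from the service proxy usually means no ready endpoints behind
	// monitoring/prometheus-k8s; check whether any Prometheus pods became Ready.
	eps, err := client.CoreV1().Endpoints("monitoring").Get(
		context.TODO(), "prometheus-k8s", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := 0
	for _, subset := range eps.Subsets {
		ready += len(subset.Addresses)
	}
	fmt.Printf("prometheus-k8s ready endpoints: %d\n", ready)

	// Probe Prometheus through the same service-proxy path the health check
	// uses (GET services http:prometheus-k8s:9090); /-/ready is Prometheus'
	// own readiness endpoint and is used here only for illustration.
	body, err := client.CoreV1().Services("monitoring").
		ProxyGet("http", "prometheus-k8s", "9090", "/-/ready", nil).
		DoRaw(context.TODO())
	fmt.Printf("proxy response: %q, err: %v\n", string(body), err)
}
```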
W1230 06:27:07.476] 2019/12/30 06:27:07 process.go:155: Step '/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1211526009040408576 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml' finished in 15m29.388542461s
W1230 06:27:07.476] 2019/12/30 06:27:07 e2e.go:531: Dumping logs from nodes to GCS directly at path: gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1211526009040408576/artifacts
W1230 06:27:07.477] 2019/12/30 06:27:07 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1211526009040408576/artifacts
W1230 06:27:07.477] 2019/12/30 06:27:07 process.go:153: Running: ./test/kubemark/master-log-dump.sh /workspace/_artifacts
W1230 06:27:07.536] Trying to find master named 'kubemark-5000-master'
W1230 06:27:07.537] Looking for address 'kubemark-5000-master-internal-ip'
... skipping 22 lines ...
W1230 06:27:43.876] 
W1230 06:27:43.876] Specify --start=47724 in the next get-serial-port-output invocation to get only the new output starting from here.
W1230 06:27:49.737] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W1230 06:27:49.806] scp: /var/log/fluentd.log*: No such file or directory
W1230 06:27:49.806] scp: /var/log/kubelet.cov*: No such file or directory
W1230 06:27:49.807] scp: /var/log/startupscript.log*: No such file or directory
W1230 06:27:49.812] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I1230 06:27:49.913] Dumping logs from nodes to GCS directly at 'gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1211526009040408576/artifacts' using logexporter
I1230 06:27:49.913] Detecting nodes in the cluster
I1230 06:27:53.952] namespace/logexporter created
I1230 06:27:53.989] secret/google-service-account created
I1230 06:27:54.027] daemonset.apps/logexporter created
W1230 06:27:54.857] CommandException: One or more URLs matched no objects.
W1230 06:28:10.815] CommandException: One or more URLs matched no objects.
W1230 06:28:18.069] scp: /var/log/glbc.log*: No such file or directory
W1230 06:28:18.069] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W1230 06:28:18.138] scp: /var/log/fluentd.log*: No such file or directory
W1230 06:28:18.138] scp: /var/log/kubelet.cov*: No such file or directory
W1230 06:28:18.138] scp: /var/log/startupscript.log*: No such file or directory
W1230 06:28:18.143] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W1230 06:28:18.209] 2019/12/30 06:28:18 process.go:155: Step './test/kubemark/master-log-dump.sh /workspace/_artifacts' finished in 1m10.732938965s
W1230 06:28:18.210] 2019/12/30 06:28:18 process.go:153: Running: ./test/kubemark/stop-kubemark.sh
I1230 06:28:18.310] Skipping dumping of node logs
I1230 06:28:26.900] Successfully listed marker files for successful nodes
I1230 06:28:27.295] Fetching logs from logexporter-26n9s running on kubemark-5000-minion-group-rg6t
I1230 06:28:27.299] Fetching logs from logexporter-2bjvr running on kubemark-5000-minion-group-k2t0
... skipping 235 lines ...
W1230 06:36:58.289] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/regions/us-east1/routers/kubemark-5000-nat-router].
I1230 06:36:59.020] Deleting firewall rules remaining in network kubemark-5000: kubemark-5000-kubemark-default-internal-master
I1230 06:36:59.020] kubemark-5000-kubemark-default-internal-node
I1230 06:36:59.021] kubemark-5000-kubemark-master-etcd
I1230 06:36:59.021] kubemark-5000-kubemark-master-https
I1230 06:36:59.021] kubemark-5000-kubemark-minion-all
W1230 06:37:01.800] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W1230 06:37:01.800]  - The resource 'projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-etcd' was not found
W1230 06:37:01.801] 
W1230 06:37:02.078] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-https].
W1230 06:37:02.712] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-etcd].
W1230 06:37:03.503] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-minion-all].
W1230 06:37:07.823] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-default-internal-master].
W1230 06:37:08.230] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-default-internal-node].
W1230 06:37:08.293] Failed to delete firewall rules.
W1230 06:37:08.577] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/regions/us-east1/addresses/kubemark-5000-kubemark-master-ip].
I1230 06:37:09.052] Deleting custom subnet...
W1230 06:37:09.946] ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
W1230 06:37:09.947]  - The subnetwork resource 'projects/kubemark-scalability-testing/regions/us-east1/subnetworks/kubemark-5000-custom-subnet' is already being used by 'projects/kubemark-scalability-testing/regions/us-east1/addresses/kubemark-5000-kubemark-master-internal-ip'
W1230 06:37:09.947] 
W1230 06:37:11.688] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/regions/us-east1/addresses/kubemark-5000-kubemark-master-internal-ip].
W1230 06:37:13.214] ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
W1230 06:37:13.214]  - The network resource 'projects/kubemark-scalability-testing/global/networks/kubemark-5000' is already being used by 'projects/kubemark-scalability-testing/regions/us-east1/subnetworks/kubemark-5000-custom-subnet'
W1230 06:37:13.214] 
I1230 06:37:13.315] Failed to delete network 'kubemark-5000'. Listing firewall-rules:
W1230 06:37:13.888] 
W1230 06:37:13.888] To show all fields of the firewall, please show in JSON format: --format=json
W1230 06:37:13.888] To show all fields in table format, please see the examples in --help.
W1230 06:37:13.888] 
W1230 06:37:14.090] W1230 06:37:14.090196   36937 loader.go:223] Config not found: /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
W1230 06:37:14.212] W1230 06:37:14.212635   36984 loader.go:223] Config not found: /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
... skipping 16 lines ...
I1230 06:37:18.499] Property "users.kubemark-scalability-testing_kubemark-5000-kubemark-basic-auth" unset.
I1230 06:37:18.629] Property "contexts.kubemark-scalability-testing_kubemark-5000-kubemark" unset.
I1230 06:37:18.633] Cleared config for kubemark-scalability-testing_kubemark-5000-kubemark from /workspace/.kube/config
I1230 06:37:18.634] Done
W1230 06:37:18.654] 2019/12/30 06:37:18 process.go:155: Step './test/kubemark/stop-kubemark.sh' finished in 9m0.426535391s
W1230 06:37:18.655] 2019/12/30 06:37:18 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W1230 06:37:18.656] 2019/12/30 06:37:18 main.go:319: Something went wrong: encountered 1 errors: [error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1211526009040408576 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1]
W1230 06:37:18.657] Traceback (most recent call last):
W1230 06:37:18.657]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W1230 06:37:18.658]     main(parse_args())
W1230 06:37:18.658]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W1230 06:37:18.658]     mode.start(runner_args)
W1230 06:37:18.658]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W1230 06:37:18.658]     check_env(env, self.command, *args)
W1230 06:37:18.658]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W1230 06:37:18.659]     subprocess.check_call(cmd, env=env)
W1230 06:37:18.659]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W1230 06:37:18.659]     raise CalledProcessError(retcode, cmd)
W1230 06:37:18.661] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--provider=gce', '--cluster=kubemark-5000', '--gcp-network=kubemark-5000', '--extract=ci/latest', '--gcp-node-image=gci', '--gcp-node-size=n1-standard-8', '--gcp-nodes=84', '--gcp-project=kubemark-scalability-testing', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=5000', '--test_args=--ginkgo.focus=xxxx', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--experimental-gcp-snapshot-prometheus-disk=true', '--test-cmd-args=--experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1211526009040408576', '--test-cmd-args=--nodes=5000', '--test-cmd-args=--provider=kubemark', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_restart_count_check.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=1080m', '--logexporter-gcs-path=gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1211526009040408576/artifacts')' returned non-zero exit status 1
E1230 06:37:18.661] Command failed
I1230 06:37:18.661] process 497 exited with code 1 after 40.7m
E1230 06:37:18.661] FAIL: ci-kubernetes-kubemark-gce-scale
I1230 06:37:18.662] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W1230 06:37:19.115] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I1230 06:37:19.155] process 37457 exited with code 0 after 0.0m
I1230 06:37:19.156] Call:  gcloud config get-value account
I1230 06:37:19.444] process 37470 exited with code 0 after 0.0m
I1230 06:37:19.444] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
... skipping 21 lines ...