Result: FAILURE
Tests: 1 failed / 17 succeeded
Started: 2020-01-08 15:43
Elapsed: 45m46s
Revision: v1.18.0-alpha.1.471+6c677b52a1af70
Builder: gke-prow-default-pool-cf4891d4-sgj5
links: {u'resultstore': {u'url': u'https://source.cloud.google.com/results/invocations/bdcc0dda-39fb-4930-bcfb-cf6d663782b4/targets/test'}}
pod: 4cf8eb42-322d-11ea-9709-02f27a93e62e
resultstore: https://source.cloud.google.com/results/invocations/bdcc0dda-39fb-4930-bcfb-cf6d663782b4/targets/test
infra-commit: 7b67e64c7
job-version: v1.18.0-alpha.1.471+6c677b52a1af70
repo: k8s.io/kubernetes
repo-commit: 6c677b52a1af704f8101e71ca860fc6d8191314b
repos: {u'k8s.io/kubernetes': u'master', u'k8s.io/perf-tests': u'master'}
revision: v1.18.0-alpha.1.471+6c677b52a1af70

Test Failures


ClusterLoaderV2 15m28s

error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1214935242213691395 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1
				from junit_runner.xml




Error lines from build-log.txt

... skipping 427 lines ...
W0108 15:50:14.977] Looking for address 'kubemark-5000-master-ip'
W0108 15:50:16.932] Looking for address 'kubemark-5000-master-internal-ip'
W0108 15:50:18.044] Using master: kubemark-5000-master (external IP: 35.237.141.204; internal IP: 10.40.0.2)
I0108 15:50:18.144] Waiting up to 300 seconds for cluster initialization.
I0108 15:50:18.145] 
I0108 15:50:18.145]   This will continually check to see if the API for kubernetes is reachable.
I0108 15:50:18.145]   This may time out if there was some uncaught error during start up.
I0108 15:50:18.145] 
I0108 15:50:18.186] Kubernetes cluster created.
I0108 15:50:18.391] Cluster "kubemark-scalability-testing_kubemark-5000" set.
I0108 15:50:18.599] User "kubemark-scalability-testing_kubemark-5000" set.
I0108 15:50:18.816] Context "kubemark-scalability-testing_kubemark-5000" created.
I0108 15:50:19.012] Switched to context "kubemark-scalability-testing_kubemark-5000".
... skipping 100 lines ...
I0108 15:51:17.877] kubemark-5000-minion-group-z4xv   Ready                      <none>   20s   v1.18.0-alpha.1.471+6c677b52a1af70
I0108 15:51:17.878] kubemark-5000-minion-group-zpmc   Ready                      <none>   24s   v1.18.0-alpha.1.471+6c677b52a1af70
I0108 15:51:17.878] kubemark-5000-minion-group-zx33   Ready                      <none>   22s   v1.18.0-alpha.1.471+6c677b52a1af70
I0108 15:51:17.878] kubemark-5000-minion-group-zx5t   Ready                      <none>   20s   v1.18.0-alpha.1.471+6c677b52a1af70
I0108 15:51:17.878] kubemark-5000-minion-heapster     Ready                      <none>   39s   v1.18.0-alpha.1.471+6c677b52a1af70
I0108 15:51:18.329] Validate output:
I0108 15:51:18.776] NAME                 STATUS    MESSAGE             ERROR
I0108 15:51:18.776] etcd-1               Healthy   {"health":"true"}   
I0108 15:51:18.776] scheduler            Healthy   ok                  
I0108 15:51:18.776] controller-manager   Healthy   ok                  
I0108 15:51:18.777] etcd-0               Healthy   {"health":"true"}   
I0108 15:51:18.782] Cluster validation succeeded
W0108 15:51:18.882] Done, listing cluster services:
... skipping 220 lines ...
W0108 15:54:28.130] Looking for address 'kubemark-5000-kubemark-master-ip'
W0108 15:54:28.957] Looking for address 'kubemark-5000-kubemark-master-internal-ip'
W0108 15:54:29.901] Using master: kubemark-5000-kubemark-master (external IP: 35.237.157.213; internal IP: 10.40.3.216)
I0108 15:54:30.002] Waiting up to 300 seconds for cluster initialization.
I0108 15:54:30.002] 
I0108 15:54:30.002]   This will continually check to see if the API for kubernetes is reachable.
I0108 15:54:30.003]   This may time out if there was some uncaught error during start up.
I0108 15:54:30.003] 
I0108 15:54:37.066] .Kubernetes cluster created.
I0108 15:54:37.235] Cluster "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I0108 15:54:37.398] User "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I0108 15:54:37.555] Context "kubemark-scalability-testing_kubemark-5000-kubemark" created.
I0108 15:54:37.702] Switched to context "kubemark-scalability-testing_kubemark-5000-kubemark".
... skipping 22 lines ...
I0108 15:55:18.063] NAME                            STATUS                        ROLES    AGE   VERSION
I0108 15:55:18.064] kubemark-5000-kubemark-master   NotReady,SchedulingDisabled   <none>   20s   v1.18.0-alpha.1.471+6c677b52a1af70
I0108 15:55:18.068] Found 1 node(s).
I0108 15:55:18.388] NAME                            STATUS                        ROLES    AGE   VERSION
I0108 15:55:18.388] kubemark-5000-kubemark-master   NotReady,SchedulingDisabled   <none>   20s   v1.18.0-alpha.1.471+6c677b52a1af70
I0108 15:55:18.731] Validate output:
I0108 15:55:19.033] NAME                 STATUS    MESSAGE             ERROR
I0108 15:55:19.034] scheduler            Healthy   ok                  
I0108 15:55:19.034] controller-manager   Healthy   ok                  
I0108 15:55:19.034] etcd-1               Healthy   {"health":"true"}   
I0108 15:55:19.034] etcd-0               Healthy   {"health":"true"}   
I0108 15:55:19.037] Cluster validation encountered some problems, but cluster should be in working order
W0108 15:55:19.139] ...ignoring non-fatal errors in validate-cluster
W0108 15:55:19.139] Done, listing cluster services:
W0108 15:55:19.139] 
I0108 15:55:19.355] Kubernetes master is running at https://35.237.157.213
I0108 15:55:19.355] 
I0108 15:55:19.356] To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
I0108 15:55:19.361] 
... skipping 5142 lines ...
W0108 16:01:40.281] I0108 16:01:40.281498   29476 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-serviceMonitor.yaml
W0108 16:01:40.319] I0108 16:01:40.319470   29476 prometheus.go:201] Exposing kube-apiserver metrics in kubemark cluster
W0108 16:01:40.466] I0108 16:01:40.466526   29476 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-endpoints.yaml
W0108 16:01:40.505] I0108 16:01:40.505586   29476 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-service.yaml
W0108 16:01:40.543] I0108 16:01:40.543261   29476 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-serviceMonitor.yaml
W0108 16:01:40.581] I0108 16:01:40.581598   29476 prometheus.go:277] Waiting for Prometheus stack to become healthy...
W0108 16:02:10.619] W0108 16:02:10.619612   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:02:40.620] W0108 16:02:40.620356   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:03:10.620] W0108 16:03:10.620586   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:03:40.621] W0108 16:03:40.621370   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:04:10.620] W0108 16:04:10.620444   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:04:40.621] W0108 16:04:40.621590   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:05:10.620] W0108 16:05:10.620480   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:05:40.622] W0108 16:05:40.620337   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:06:10.620] W0108 16:06:10.620414   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:06:40.620] W0108 16:06:40.620083   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:07:10.620] W0108 16:07:10.619756   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:07:40.620] W0108 16:07:40.620174   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:08:10.620] W0108 16:08:10.620597   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:08:40.620] W0108 16:08:40.620433   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:09:10.620] W0108 16:09:10.620326   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:09:40.620] W0108 16:09:40.619931   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:10:10.621] W0108 16:10:10.620976   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:10:40.620] W0108 16:10:40.620474   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:11:10.620] W0108 16:11:10.620610   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:11:40.620] W0108 16:11:40.620413   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:12:10.621] W0108 16:12:10.620732   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:12:40.622] W0108 16:12:40.622283   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:13:10.620] W0108 16:13:10.620439   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:13:40.620] W0108 16:13:40.620420   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:14:10.624] W0108 16:14:10.624077   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:14:40.619] W0108 16:14:40.619647   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:15:10.620] W0108 16:15:10.620465   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:15:40.620] W0108 16:15:40.620356   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:16:10.620] W0108 16:16:10.620180   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:16:40.620] W0108 16:16:40.619851   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:16:40.657] W0108 16:16:40.657241   29476 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0108 16:16:40.657] I0108 16:16:40.657275   29476 prometheus.go:325] Dumping monitoring/prometheus-k8s events...
W0108 16:16:40.694] I0108 16:16:40.693998   29476 prometheus.go:336] {
W0108 16:16:40.694]   "metadata": {
W0108 16:16:40.694]     "selfLink": "/api/v1/namespaces/monitoring/events",
W0108 16:16:40.694]     "resourceVersion": "74684"
W0108 16:16:40.694]   },
... skipping 57 lines ...
W0108 16:16:40.703]       "eventTime": null,
W0108 16:16:40.703]       "reportingComponent": "",
W0108 16:16:40.703]       "reportingInstance": ""
W0108 16:16:40.703]     }
W0108 16:16:40.703]   ]
W0108 16:16:40.703] }
W0108 16:16:40.703] F0108 16:16:40.694035   29476 clusterloader.go:248] Error while setting up prometheus stack: timed out waiting for the condition
W0108 16:16:40.720] 2020/01/08 16:16:40 process.go:155: Step '/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1214935242213691395 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml' finished in 15m28.387663988s
W0108 16:16:40.723] 2020/01/08 16:16:40 e2e.go:531: Dumping logs from nodes to GCS directly at path: gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1214935242213691395/artifacts
W0108 16:16:40.724] 2020/01/08 16:16:40 process.go:153: Running: ./test/kubemark/master-log-dump.sh /workspace/_artifacts
W0108 16:16:40.724] 2020/01/08 16:16:40 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1214935242213691395/artifacts
W0108 16:16:40.790] Trying to find master named 'kubemark-5000-master'
W0108 16:16:40.791] Looking for address 'kubemark-5000-master-internal-ip'
... skipping 22 lines ...
W0108 16:17:19.305] 
W0108 16:17:19.305] Specify --start=47751 in the next get-serial-port-output invocation to get only the new output starting from here.
W0108 16:17:25.793] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0108 16:17:25.868] scp: /var/log/fluentd.log*: No such file or directory
W0108 16:17:25.868] scp: /var/log/kubelet.cov*: No such file or directory
W0108 16:17:25.868] scp: /var/log/startupscript.log*: No such file or directory
W0108 16:17:25.875] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0108 16:17:25.976] Dumping logs from nodes to GCS directly at 'gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1214935242213691395/artifacts' using logexporter
I0108 16:17:25.976] Detecting nodes in the cluster
I0108 16:17:30.266] namespace/logexporter created
I0108 16:17:30.303] secret/google-service-account created
I0108 16:17:30.341] daemonset.apps/logexporter created
W0108 16:17:31.194] CommandException: One or more URLs matched no objects.
W0108 16:17:47.145] CommandException: One or more URLs matched no objects.
W0108 16:17:52.560] scp: /var/log/glbc.log*: No such file or directory
W0108 16:17:52.560] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0108 16:17:52.628] scp: /var/log/fluentd.log*: No such file or directory
W0108 16:17:52.628] scp: /var/log/kubelet.cov*: No such file or directory
W0108 16:17:52.629] scp: /var/log/startupscript.log*: No such file or directory
W0108 16:17:52.633] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0108 16:17:52.714] 2020/01/08 16:17:52 process.go:155: Step './test/kubemark/master-log-dump.sh /workspace/_artifacts' finished in 1m11.99045134s
W0108 16:17:52.714] 2020/01/08 16:17:52 process.go:153: Running: ./test/kubemark/stop-kubemark.sh
I0108 16:17:52.815] Skipping dumping of node logs
I0108 16:18:03.343] Successfully listed marker files for successful nodes
I0108 16:18:19.826] Successfully listed marker files for successful nodes
I0108 16:18:20.208] Fetching logs from logexporter-264rk running on kubemark-5000-minion-group-c3l0
... skipping 237 lines ...
W0108 16:27:49.953] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/regions/us-east1/routers/kubemark-5000-nat-router].
I0108 16:27:50.844] Deleting firewall rules remaining in network kubemark-5000: kubemark-5000-kubemark-default-internal-master
I0108 16:27:50.845] kubemark-5000-kubemark-default-internal-node
I0108 16:27:50.845] kubemark-5000-kubemark-master-etcd
I0108 16:27:50.845] kubemark-5000-kubemark-master-https
I0108 16:27:50.845] kubemark-5000-kubemark-minion-all
W0108 16:27:54.665] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0108 16:27:54.665]  - The resource 'projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-etcd' is not ready
W0108 16:27:54.666] 
W0108 16:27:55.555] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0108 16:27:55.556]  - The resource 'projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-https' is not ready
W0108 16:27:55.556] 
W0108 16:27:56.542] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W0108 16:27:56.543]  - The resource 'projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-minion-all' is not ready
W0108 16:27:56.543] 
W0108 16:27:59.428] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-https].
W0108 16:27:59.511] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-default-internal-master].
W0108 16:28:00.427] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-master-etcd].
W0108 16:28:00.939] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-default-internal-node].
W0108 16:28:01.020] Failed to delete firewall rules.
W0108 16:28:01.167] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/global/firewalls/kubemark-5000-kubemark-minion-all].
I0108 16:28:01.946] Deleting custom subnet...
W0108 16:28:03.054] ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
W0108 16:28:03.055]  - The subnetwork resource 'projects/kubemark-scalability-testing/regions/us-east1/subnetworks/kubemark-5000-custom-subnet' is already being used by 'projects/kubemark-scalability-testing/regions/us-east1/addresses/kubemark-5000-kubemark-master-internal-ip'
W0108 16:28:03.055] 
W0108 16:28:06.386] ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
W0108 16:28:06.387]  - The network resource 'projects/kubemark-scalability-testing/global/networks/kubemark-5000' is already being used by 'projects/kubemark-scalability-testing/regions/us-east1/subnetworks/kubemark-5000-custom-subnet'
W0108 16:28:06.387] 
I0108 16:28:06.487] Failed to delete network 'kubemark-5000'. Listing firewall-rules:
W0108 16:28:06.984] Deleted [https://www.googleapis.com/compute/v1/projects/kubemark-scalability-testing/regions/us-east1/addresses/kubemark-5000-kubemark-master-ip].
W0108 16:28:07.187] 
W0108 16:28:07.188] To show all fields of the firewall, please show in JSON format: --format=json
W0108 16:28:07.188] To show all fields in table format, please see the examples in --help.
W0108 16:28:07.188] 
W0108 16:28:07.389] W0108 16:28:07.389682   37280 loader.go:223] Config not found: /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
... skipping 18 lines ...
I0108 16:28:18.608] Property "users.kubemark-scalability-testing_kubemark-5000-kubemark-basic-auth" unset.
I0108 16:28:18.740] Property "contexts.kubemark-scalability-testing_kubemark-5000-kubemark" unset.
I0108 16:28:18.743] Cleared config for kubemark-scalability-testing_kubemark-5000-kubemark from /workspace/.kube/config
I0108 16:28:18.743] Done
W0108 16:28:18.781] 2020/01/08 16:28:18 process.go:155: Step './test/kubemark/stop-kubemark.sh' finished in 10m26.032150517s
W0108 16:28:18.781] 2020/01/08 16:28:18 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0108 16:28:18.782] 2020/01/08 16:28:18 main.go:316: Something went wrong: encountered 1 errors: [error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1214935242213691395 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1]
W0108 16:28:18.782] Traceback (most recent call last):
W0108 16:28:18.783]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0108 16:28:18.783]     main(parse_args())
W0108 16:28:18.783]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0108 16:28:18.783]     mode.start(runner_args)
W0108 16:28:18.783]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0108 16:28:18.783]     check_env(env, self.command, *args)
W0108 16:28:18.783]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0108 16:28:18.784]     subprocess.check_call(cmd, env=env)
W0108 16:28:18.784]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0108 16:28:18.784]     raise CalledProcessError(retcode, cmd)
W0108 16:28:18.786] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--provider=gce', '--cluster=kubemark-5000', '--gcp-network=kubemark-5000', '--extract=ci/latest', '--gcp-node-image=gci', '--gcp-node-size=n1-standard-8', '--gcp-nodes=84', '--gcp-project=kubemark-scalability-testing', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=5000', '--test_args=--ginkgo.focus=xxxx', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--experimental-gcp-snapshot-prometheus-disk=true', '--test-cmd-args=--experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1214935242213691395', '--test-cmd-args=--nodes=5000', '--test-cmd-args=--provider=kubemark', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_restart_count_check.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=1080m', '--logexporter-gcs-path=gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1214935242213691395/artifacts')' returned non-zero exit status 1
E0108 16:28:18.786] Command failed
I0108 16:28:18.786] process 498 exited with code 1 after 42.7m
E0108 16:28:18.786] FAIL: ci-kubernetes-kubemark-gce-scale
I0108 16:28:18.787] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0108 16:28:19.288] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0108 16:28:19.339] process 37860 exited with code 0 after 0.0m
I0108 16:28:19.339] Call:  gcloud config get-value account
I0108 16:28:19.650] process 37873 exited with code 0 after 0.0m
I0108 16:28:19.651] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
... skipping 21 lines ...