Result: FAILURE
Tests: 1 failed / 17 succeeded
Started: 2020-01-03 18:44
Elapsed: 46m7s
Revision:
Builder: gke-prow-ssd-pool-1a225945-86t9
links: {u'resultstore': {u'url': u'https://source.cloud.google.com/results/invocations/8620f1fb-8ca2-41ea-956b-dfcbe98a124f/targets/test'}}
pod: f01cd776-2e58-11ea-a07b-c6eb1bf16817

resultstore: https://source.cloud.google.com/results/invocations/8620f1fb-8ca2-41ea-956b-dfcbe98a124f/targets/test
infra-commit: 06030737a
job-version: v1.18.0-alpha.1.309+ce2102f3637134
pod: f01cd776-2e58-11ea-a07b-c6eb1bf16817
repo: k8s.io/kubernetes
repo-commit: ce2102f3637134519ec189f096da5277af6072a6
repos: {u'k8s.io/kubernetes': u'master', u'k8s.io/perf-tests': u'master'}
revision: v1.18.0-alpha.1.309+ce2102f3637134

Test Failures


ClusterLoaderV2 15m31s

error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1213169164328374272 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1
				from junit_runner.xml
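
The build log below shows why this step failed: after applying the Prometheus manifests, clusterloader2 polled the prometheus-k8s service every 30 seconds for about 15 minutes, got "the server is currently unable to handle the request (get services http:prometheus-k8s:9090)" on every attempt, and finally aborted with "Error while setting up prometheus stack: timed out waiting for the condition". As a minimal sketch of that failing check (not the tool's actual implementation), the Python snippet below polls the prometheus-k8s service in the monitoring namespace through the apiserver service proxy; the use of kubectl get --raw, the /-/healthy endpoint, and the 900s/30s timing (chosen to mirror the cadence visible in the log) are assumptions for illustration.

#!/usr/bin/env python3
# Hypothetical sketch: poll the prometheus-k8s service through the apiserver
# service proxy, similar in spirit to the health wait that times out in the
# log below. The proxy path, /-/healthy endpoint, and timeouts are assumptions.
import subprocess
import time

# Service proxy path for the port named in the log ("get services http:prometheus-k8s:9090").
PROXY_PATH = "/api/v1/namespaces/monitoring/services/http:prometheus-k8s:9090/proxy/-/healthy"


def prometheus_healthy():
    """Return True if Prometheus answers /-/healthy via the apiserver proxy."""
    result = subprocess.run(
        ["kubectl", "get", "--raw", PROXY_PATH],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0


def wait_for_prometheus(timeout_s=900, poll_s=30):
    """Poll until Prometheus is reachable or the timeout expires."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if prometheus_healthy():
            print("Prometheus stack is healthy")
            return True
        print("prometheus not reachable yet; retrying")
        time.sleep(poll_s)
    print("timed out waiting for the Prometheus stack")
    return False


if __name__ == "__main__":
    wait_for_prometheus()

Run against the kubemark cluster's kubeconfig, a check like this would have failed for the full window shown below (19:02 through 19:17), which is what turned the whole ClusterLoaderV2 step into exit status 1.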




Error lines from build-log.txt

... skipping 429 lines ...
W0103 18:51:20.385] Trying to find master named 'kubemark-5000-master'
W0103 18:51:20.385] Looking for address 'kubemark-5000-master-ip'
W0103 18:51:21.214] Looking for address 'kubemark-5000-master-internal-ip'
I0103 18:51:22.016] Waiting up to 300 seconds for cluster initialization.
I0103 18:51:22.016] 
I0103 18:51:22.016]   This will continually check to see if the API for kubernetes is reachable.
I0103 18:51:22.017]   This may time out if there was some uncaught error during start up.
I0103 18:51:22.017] 
W0103 18:51:22.117] Using master: kubemark-5000-master (external IP: 35.237.157.213; internal IP: 10.40.0.2)
I0103 18:51:22.218] Kubernetes cluster created.
I0103 18:51:22.324] Cluster "kubemark-scalability-testing_kubemark-5000" set.
I0103 18:51:22.470] User "kubemark-scalability-testing_kubemark-5000" set.
I0103 18:51:22.629] Context "kubemark-scalability-testing_kubemark-5000" created.
... skipping 102 lines ...
I0103 18:52:18.124] kubemark-5000-minion-group-x88q   Ready                      <none>   22s   v1.18.0-alpha.1.309+ce2102f3637134
I0103 18:52:18.125] kubemark-5000-minion-group-xb2q   Ready                      <none>   18s   v1.18.0-alpha.1.309+ce2102f3637134
I0103 18:52:18.125] kubemark-5000-minion-group-xp1g   Ready                      <none>   21s   v1.18.0-alpha.1.309+ce2102f3637134
I0103 18:52:18.125] kubemark-5000-minion-group-znd2   Ready                      <none>   20s   v1.18.0-alpha.1.309+ce2102f3637134
I0103 18:52:18.125] kubemark-5000-minion-heapster     Ready                      <none>   35s   v1.18.0-alpha.1.309+ce2102f3637134
I0103 18:52:18.418] Validate output:
I0103 18:52:18.720] NAME                 STATUS    MESSAGE             ERROR
I0103 18:52:18.721] scheduler            Healthy   ok                  
I0103 18:52:18.721] controller-manager   Healthy   ok                  
I0103 18:52:18.721] etcd-0               Healthy   {"health":"true"}   
I0103 18:52:18.721] etcd-1               Healthy   {"health":"true"}   
I0103 18:52:18.728] Cluster validation succeeded
W0103 18:52:18.829] Done, listing cluster services:
... skipping 219 lines ...
W0103 18:54:50.222] Trying to find master named 'kubemark-5000-kubemark-master'
W0103 18:54:50.223] Looking for address 'kubemark-5000-kubemark-master-ip'
W0103 18:54:51.125] Looking for address 'kubemark-5000-kubemark-master-internal-ip'
I0103 18:54:51.950] Waiting up to 300 seconds for cluster initialization.
I0103 18:54:51.950] 
I0103 18:54:51.951]   This will continually check to see if the API for kubernetes is reachable.
I0103 18:54:51.951]   This may time out if there was some uncaught error during start up.
I0103 18:54:51.951] 
I0103 18:55:26.168] ............Kubernetes cluster created.
W0103 18:55:26.268] Using master: kubemark-5000-kubemark-master (external IP: 35.243.232.192; internal IP: 10.40.3.216)
I0103 18:55:26.369] Cluster "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I0103 18:55:26.489] User "kubemark-scalability-testing_kubemark-5000-kubemark" set.
I0103 18:55:26.669] Context "kubemark-scalability-testing_kubemark-5000-kubemark" created.
... skipping 19 lines ...
I0103 18:55:51.297] Found 0 Nodes, allowing additional 2 iterations for other Nodes to join.
I0103 18:55:51.298] Waiting for 1 ready nodes. 0 ready nodes, 1 registered. Retrying.
I0103 18:56:06.592] Found 1 node(s).
I0103 18:56:06.870] NAME                            STATUS                     ROLES    AGE   VERSION
I0103 18:56:06.871] kubemark-5000-kubemark-master   Ready,SchedulingDisabled   <none>   21s   v1.18.0-alpha.1.309+ce2102f3637134
I0103 18:56:07.167] Validate output:
I0103 18:56:07.438] NAME                 STATUS    MESSAGE             ERROR
I0103 18:56:07.438] controller-manager   Healthy   ok                  
I0103 18:56:07.438] scheduler            Healthy   ok                  
I0103 18:56:07.439] etcd-1               Healthy   {"health":"true"}   
I0103 18:56:07.439] etcd-0               Healthy   {"health":"true"}   
I0103 18:56:07.445] Cluster validation succeeded
W0103 18:56:07.545] Done, listing cluster services:
... skipping 5147 lines ...
W0103 19:02:05.690] I0103 19:02:05.689635   29076 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/prometheus-serviceMonitor.yaml
W0103 19:02:05.729] I0103 19:02:05.729215   29076 prometheus.go:201] Exposing kube-apiserver metrics in kubemark cluster
W0103 19:02:05.876] I0103 19:02:05.876378   29076 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-endpoints.yaml
W0103 19:02:05.916] I0103 19:02:05.916319   29076 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-service.yaml
W0103 19:02:05.956] I0103 19:02:05.956092   29076 framework.go:189] Applying /go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-serviceMonitor.yaml
W0103 19:02:05.997] I0103 19:02:05.996670   29076 prometheus.go:277] Waiting for Prometheus stack to become healthy...
W0103 19:02:36.035] W0103 19:02:36.035565   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:03:06.036] W0103 19:03:06.036009   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:03:36.036] W0103 19:03:36.036116   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:04:06.041] W0103 19:04:06.041314   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:04:36.036] W0103 19:04:36.035552   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:05:06.036] W0103 19:05:06.035911   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:05:36.036] W0103 19:05:36.035607   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:06:06.042] W0103 19:06:06.041678   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:06:36.036] W0103 19:06:36.035635   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:07:06.036] W0103 19:07:06.036100   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:07:36.035] W0103 19:07:36.035549   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:08:06.036] W0103 19:08:06.035803   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:08:36.039] W0103 19:08:36.039389   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:09:06.036] W0103 19:09:06.036387   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:09:36.036] W0103 19:09:36.036224   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:10:06.036] W0103 19:10:06.036259   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:10:36.036] W0103 19:10:36.035714   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:11:06.036] W0103 19:11:06.036442   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:11:36.036] W0103 19:11:36.035709   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:12:06.035] W0103 19:12:06.035439   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:12:36.036] W0103 19:12:36.036129   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:13:06.037] W0103 19:13:06.036688   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:13:36.036] W0103 19:13:36.036353   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:14:06.035] W0103 19:14:06.035082   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:14:36.036] W0103 19:14:36.036302   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:15:06.037] W0103 19:15:06.037075   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:15:36.039] W0103 19:15:36.038723   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:16:06.036] W0103 19:16:06.035919   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:16:36.036] W0103 19:16:36.035798   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:17:06.039] W0103 19:17:06.038560   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:17:06.076] W0103 19:17:06.076129   29076 util.go:59] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090)
W0103 19:17:06.077] I0103 19:17:06.076159   29076 prometheus.go:325] Dumping monitoring/prometheus-k8s events...
W0103 19:17:06.114] I0103 19:17:06.113039   29076 prometheus.go:336] {
W0103 19:17:06.114]   "metadata": {
W0103 19:17:06.115]     "selfLink": "/api/v1/namespaces/monitoring/events",
W0103 19:17:06.115]     "resourceVersion": "74163"
W0103 19:17:06.115]   },
... skipping 57 lines ...
W0103 19:17:06.126]       "eventTime": null,
W0103 19:17:06.126]       "reportingComponent": "",
W0103 19:17:06.127]       "reportingInstance": ""
W0103 19:17:06.127]     }
W0103 19:17:06.127]   ]
W0103 19:17:06.127] }
W0103 19:17:06.127] F0103 19:17:06.113069   29076 clusterloader.go:248] Error while setting up prometheus stack: timed out waiting for the condition
W0103 19:17:06.140] 2020/01/03 19:17:06 process.go:155: Step '/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1213169164328374272 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml' finished in 15m31.090252698s
W0103 19:17:06.141] 2020/01/03 19:17:06 e2e.go:531: Dumping logs from nodes to GCS directly at path: gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1213169164328374272/artifacts
W0103 19:17:06.142] 2020/01/03 19:17:06 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1213169164328374272/artifacts
W0103 19:17:06.144] 2020/01/03 19:17:06 process.go:153: Running: ./test/kubemark/master-log-dump.sh /workspace/_artifacts
W0103 19:17:06.231] Trying to find master named 'kubemark-5000-master'
W0103 19:17:06.231] Looking for address 'kubemark-5000-master-internal-ip'
... skipping 22 lines ...
W0103 19:17:47.734] 
W0103 19:17:47.735] Specify --start=47726 in the next get-serial-port-output invocation to get only the new output starting from here.
W0103 19:17:53.892] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0103 19:17:53.960] scp: /var/log/fluentd.log*: No such file or directory
W0103 19:17:53.961] scp: /var/log/kubelet.cov*: No such file or directory
W0103 19:17:53.961] scp: /var/log/startupscript.log*: No such file or directory
W0103 19:17:53.967] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0103 19:17:54.067] Dumping logs from nodes to GCS directly at 'gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1213169164328374272/artifacts' using logexporter
I0103 19:17:54.068] Detecting nodes in the cluster
I0103 19:17:59.596] namespace/logexporter created
I0103 19:17:59.635] secret/google-service-account created
I0103 19:17:59.674] daemonset.apps/logexporter created
W0103 19:18:00.686] CommandException: One or more URLs matched no objects.
W0103 19:18:16.898] CommandException: One or more URLs matched no objects.
W0103 19:18:22.873] scp: /var/log/glbc.log*: No such file or directory
W0103 19:18:22.873] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0103 19:18:22.942] scp: /var/log/fluentd.log*: No such file or directory
W0103 19:18:22.942] scp: /var/log/kubelet.cov*: No such file or directory
W0103 19:18:22.942] scp: /var/log/startupscript.log*: No such file or directory
W0103 19:18:22.947] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0103 19:18:23.034] 2020/01/03 19:18:23 process.go:155: Step './test/kubemark/master-log-dump.sh /workspace/_artifacts' finished in 1m16.890232371s
W0103 19:18:23.034] 2020/01/03 19:18:23 process.go:153: Running: ./test/kubemark/stop-kubemark.sh
I0103 19:18:23.135] Skipping dumping of node logs
I0103 19:18:33.252] Successfully listed marker files for successful nodes
I0103 19:18:49.481] Successfully listed marker files for successful nodes
I0103 19:18:49.937] Fetching logs from logexporter-2k9rx running on kubemark-5000-minion-group-qkqh
... skipping 267 lines ...
I0103 19:29:05.542] Cleared config for kubemark-scalability-testing_kubemark-5000 from /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
I0103 19:29:05.542] Done
W0103 19:29:05.563] W0103 19:29:05.536517   37311 loader.go:223] Config not found: /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
W0103 19:29:05.563] W0103 19:29:05.536710   37311 loader.go:223] Config not found: /go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
W0103 19:29:05.563] 2020/01/03 19:29:05 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 9m34.56056348s
W0103 19:29:05.564] 2020/01/03 19:29:05 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0103 19:29:05.565] 2020/01/03 19:29:05 main.go:319: Something went wrong: encountered 1 errors: [error during /go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1213169164328374272 --nodes=5000 --provider=kubemark --report-dir=/workspace/_artifacts --testconfig=testing/density/config.yaml --testconfig=testing/load/config.yaml --testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml --testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml --testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml --testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml --testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml: exit status 1]
W0103 19:29:05.565] Traceback (most recent call last):
W0103 19:29:05.565]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0103 19:29:05.565]     main(parse_args())
W0103 19:29:05.566]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0103 19:29:05.566]     mode.start(runner_args)
W0103 19:29:05.566]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0103 19:29:05.566]     check_env(env, self.command, *args)
W0103 19:29:05.567]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0103 19:29:05.567]     subprocess.check_call(cmd, env=env)
W0103 19:29:05.567]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0103 19:29:05.567]     raise CalledProcessError(retcode, cmd)
W0103 19:29:05.569] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--provider=gce', '--cluster=kubemark-5000', '--gcp-network=kubemark-5000', '--extract=ci/latest', '--gcp-node-image=gci', '--gcp-node-size=n1-standard-8', '--gcp-nodes=84', '--gcp-project=kubemark-scalability-testing', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=5000', '--test_args=--ginkgo.focus=xxxx', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--experimental-gcp-snapshot-prometheus-disk=true', '--test-cmd-args=--experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-1213169164328374272', '--test-cmd-args=--nodes=5000', '--test-cmd-args=--provider=kubemark', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_prometheus_api_responsiveness.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/enable_restart_count_check.yaml', '--test-cmd-args=--testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_configmaps.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_daemonsets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_jobs.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_secrets.yaml', '--test-cmd-args=--testoverrides=./testing/load/experimental/overrides/enable_statefulsets.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=1080m', '--logexporter-gcs-path=gs://kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale/1213169164328374272/artifacts')' returned non-zero exit status 1
E0103 19:29:05.569] Command failed
I0103 19:29:05.570] process 512 exited with code 1 after 43.0m
E0103 19:29:05.570] FAIL: ci-kubernetes-kubemark-gce-scale
I0103 19:29:05.570] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0103 19:29:06.080] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0103 19:29:06.126] process 37323 exited with code 0 after 0.0m
I0103 19:29:06.126] Call:  gcloud config get-value account
I0103 19:29:06.440] process 37336 exited with code 0 after 0.0m
I0103 19:29:06.441] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
... skipping 21 lines ...