Result: FAILURE
Tests: 1 failed / 6 succeeded
Started: 2020-01-03 23:02
Elapsed: 20m39s
Builder: gke-prow-ssd-pool-1a225945-zhlt
Pod: d7a0aa3b-2e7c-11ea-a07b-c6eb1bf16817
Job-version / revision: v1.18.0-alpha.1.311+1c033105ebea36
Infra-commit: b0e571d31
Resultstore: https://source.cloud.google.com/results/invocations/03deaf84-b5bf-43ba-9ce6-d4a2e61875d4/targets/test

Test Failures

Up (8m50s)

error during ./hack/e2e-internal/e2e-up.sh: exit status 2
from junit_runner.xml
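The failure is in cluster bring-up, not in any e2e test: the log below shows the apiserver at 34.82.133.82:443 refusing connections until the 300-second initialization window expires, after which kubetest tears the cluster down and reports the step failure above. As a rough aid for local debugging, here is a minimal shell sketch that re-runs only the bring-up/tear-down portion with flags copied from the kubetest invocation recorded near the end of this log; the --gcp-project substitution is an assumption (the CI run leases a project through Boskos via --gcp-project-type=gpu-project), and the project and key-file values are placeholders you must supply yourself.

  # Sketch only: re-run the failed bring-up step outside CI.
  # Flags are taken from the kubetest command logged below; the
  # project and service-account path are placeholders, not values
  # from this run.
  kubetest \
    --provider=gce \
    --gcp-zone=us-west1-b \
    --gcp-node-image=gci \
    --gcp-project=<your-test-project> \
    --gcp-service-account=<path-to-service-account.json> \
    --cluster=bootstrap-e2e \
    --gcp-network=bootstrap-e2e \
    --extract=ci/latest \
    --up --down \
    --dump=./_artifacts \
    --timeout=150m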

Error lines from build-log.txt

... skipping 15 lines ...
I0103 23:02:03.737] process 48 exited with code 0 after 0.0m
I0103 23:02:03.738] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0103 23:02:03.739] Root: /workspace
I0103 23:02:03.740] cd to /workspace
I0103 23:02:03.741] Configure environment...
I0103 23:02:03.741] Call:  git show -s --format=format:%ct HEAD
W0103 23:02:03.747] fatal: not a git repository (or any of the parent directories): .git
I0103 23:02:03.749] process 61 exited with code 128 after 0.0m
W0103 23:02:03.750] Unable to print commit date for HEAD
I0103 23:02:03.751] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0103 23:02:04.556] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0103 23:02:04.913] process 62 exited with code 0 after 0.0m
I0103 23:02:04.914] Call:  gcloud config get-value account
... skipping 394 lines ...
W0103 23:07:51.502] NODE_NAMES=bootstrap-e2e-minion-group-163s bootstrap-e2e-minion-group-c3rm bootstrap-e2e-minion-group-gddk
W0103 23:07:51.502] Trying to find master named 'bootstrap-e2e-master'
W0103 23:07:51.503] Looking for address 'bootstrap-e2e-master-ip'
I0103 23:07:52.546] Waiting up to 300 seconds for cluster initialization.
I0103 23:07:52.547] 
I0103 23:07:52.547]   This will continually check to see if the API for kubernetes is reachable.
I0103 23:07:52.548]   This may time out if there was some uncaught error during start up.
I0103 23:07:52.548] 
I0103 23:12:53.226] .....................................................................................................................Checking for custom logdump instances, if any
I0103 23:12:53.235] Sourcing kube-util.sh
I0103 23:12:53.334] Detecting project
I0103 23:12:53.334] Project: k8s-gke-gpu-boskos-04
I0103 23:12:53.334] Network Project: k8s-gke-gpu-boskos-04
I0103 23:12:53.335] Zone: us-west1-b
I0103 23:12:53.335] Dumping logs from master locally to '/workspace/_artifacts'
W0103 23:12:53.435] Using master: bootstrap-e2e-master (external IP: 34.82.133.82; internal IP: (not set))
W0103 23:12:53.441] Cluster failed to initialize within 300 seconds.
W0103 23:12:53.441] Last output from querying API server follows:
W0103 23:12:53.441] -----------------------------------------------------
W0103 23:12:53.442]   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
W0103 23:12:53.442]                                  Dload  Upload   Total   Spent    Left  Speed
W0103 23:12:53.442] 
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (7) Failed to connect to 34.82.133.82 port 443: Connection refused
W0103 23:12:53.442] -----------------------------------------------------
W0103 23:12:53.443] 2020/01/03 23:12:53 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 8m50.838043729s
W0103 23:12:53.443] 2020/01/03 23:12:53 e2e.go:534: Dumping logs locally to: /workspace/_artifacts
W0103 23:12:53.443] 2020/01/03 23:12:53 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts
W0103 23:12:53.443] Trying to find master named 'bootstrap-e2e-master'
W0103 23:12:53.443] Looking for address 'bootstrap-e2e-master-ip'
... skipping 5 lines ...
W0103 23:13:38.288] scp: /var/log/glbc.log*: No such file or directory
W0103 23:13:38.288] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0103 23:13:38.289] scp: /var/log/kube-addon-manager.log*: No such file or directory
W0103 23:13:38.289] scp: /var/log/fluentd.log*: No such file or directory
W0103 23:13:38.289] scp: /var/log/kubelet.cov*: No such file or directory
W0103 23:13:38.290] scp: /var/log/startupscript.log*: No such file or directory
W0103 23:13:38.294] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0103 23:13:38.405] Dumping logs from nodes locally to '/workspace/_artifacts'
I0103 23:13:38.405] Detecting nodes in the cluster
I0103 23:14:22.636] Changing logfiles to be world-readable for download
I0103 23:14:22.721] Changing logfiles to be world-readable for download
I0103 23:14:23.082] Changing logfiles to be world-readable for download
I0103 23:14:26.519] Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-gddk
... skipping 6 lines ...
W0103 23:14:28.253] 
W0103 23:14:28.254] Specify --start=40071 in the next get-serial-port-output invocation to get only the new output starting from here.
W0103 23:14:29.768] scp: /var/log/fluentd.log*: No such file or directory
W0103 23:14:29.769] scp: /var/log/node-problem-detector.log*: No such file or directory
W0103 23:14:29.769] scp: /var/log/kubelet.cov*: No such file or directory
W0103 23:14:29.769] scp: /var/log/startupscript.log*: No such file or directory
W0103 23:14:29.775] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0103 23:14:29.784] scp: /var/log/fluentd.log*: No such file or directory
W0103 23:14:29.785] scp: /var/log/node-problem-detector.log*: No such file or directory
W0103 23:14:29.785] scp: /var/log/kubelet.cov*: No such file or directory
W0103 23:14:29.785] scp: /var/log/startupscript.log*: No such file or directory
W0103 23:14:29.785] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0103 23:14:30.139] scp: /var/log/fluentd.log*: No such file or directory
W0103 23:14:30.140] scp: /var/log/node-problem-detector.log*: No such file or directory
W0103 23:14:30.140] scp: /var/log/kubelet.cov*: No such file or directory
W0103 23:14:30.141] scp: /var/log/startupscript.log*: No such file or directory
W0103 23:14:30.145] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0103 23:14:33.920] INSTANCE_GROUPS=bootstrap-e2e-minion-group
W0103 23:14:33.920] NODE_NAMES=bootstrap-e2e-minion-group-163s bootstrap-e2e-minion-group-c3rm bootstrap-e2e-minion-group-gddk
I0103 23:14:35.141] Failures for bootstrap-e2e-minion-group (if any):
W0103 23:14:36.334] 2020/01/03 23:14:36 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 1m43.111558399s
W0103 23:14:36.335] 2020/01/03 23:14:36 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
W0103 23:14:36.422] Project: k8s-gke-gpu-boskos-04
... skipping 13 lines ...
W0103 23:14:45.795] Deleting Managed Instance Group...
W0103 23:17:09.567] ...............................Deleted [https://www.googleapis.com/compute/v1/projects/k8s-gke-gpu-boskos-04/zones/us-west1-b/instanceGroupManagers/bootstrap-e2e-minion-group].
W0103 23:17:09.568] done.
W0103 23:17:15.279] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-gke-gpu-boskos-04/global/instanceTemplates/bootstrap-e2e-windows-node-template].
W0103 23:17:16.663] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-gke-gpu-boskos-04/global/instanceTemplates/bootstrap-e2e-minion-template].
I0103 23:18:20.117] Removing etcd replica, name: bootstrap-e2e-master, port: 2379, result: 1
W0103 23:18:20.219] Failed to execute 'curl -s --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt --cert /etc/srv/kubernetes/pki/etcd-apiserver-client.crt --key /etc/srv/kubernetes/pki/etcd-apiserver-client.key https://127.0.0.1:2379/v2/members/$(curl -s --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt --cert /etc/srv/kubernetes/pki/etcd-apiserver-client.crt --key /etc/srv/kubernetes/pki/etcd-apiserver-client.key https://127.0.0.1:2379/v2/members -XGET | sed 's/{\"id/\n/g' | grep bootstrap-e2e-master\" | cut -f 3 -d \") -XDELETE -L 2>/dev/null' on bootstrap-e2e-master despite 5 attempts
W0103 23:18:20.221] Last attempt failed with: 
I0103 23:18:22.006] Successfully executed 'curl -s  http://127.0.0.1:4002/v2/members/$(curl -s  http://127.0.0.1:4002/v2/members -XGET | sed 's/{\"id/\n/g' | grep bootstrap-e2e-master\" | cut -f 3 -d \") -XDELETE -L 2>/dev/null' on bootstrap-e2e-master
I0103 23:18:22.006] Removing etcd replica, name: bootstrap-e2e-master, port: 4002, result: 0
W0103 23:18:27.604] Updated [https://www.googleapis.com/compute/v1/projects/k8s-gke-gpu-boskos-04/zones/us-west1-b/instances/bootstrap-e2e-master].
W0103 23:21:07.077] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-gke-gpu-boskos-04/zones/us-west1-b/instances/bootstrap-e2e-master].
W0103 23:21:19.193] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-gke-gpu-boskos-04/global/firewalls/bootstrap-e2e-master-https].
W0103 23:21:20.218] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-gke-gpu-boskos-04/global/firewalls/bootstrap-e2e-master-etcd].
... skipping 18 lines ...
W0103 23:22:26.746] W0103 23:22:26.746349   10326 loader.go:223] Config not found: /workspace/.kube/config
W0103 23:22:26.756] 2020/01/03 23:22:26 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 7m50.42182471s
W0103 23:22:26.757] 2020/01/03 23:22:26 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
I0103 23:22:26.857] Property "contexts.k8s-gke-gpu-boskos-04_bootstrap-e2e" unset.
I0103 23:22:26.858] Cleared config for k8s-gke-gpu-boskos-04_bootstrap-e2e from /workspace/.kube/config
I0103 23:22:26.858] Done
W0103 23:22:32.514] 2020/01/03 23:22:32 main.go:319: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 2
W0103 23:22:32.520] Traceback (most recent call last):
W0103 23:22:32.520]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0103 23:22:32.521]     main(parse_args())
W0103 23:22:32.521]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0103 23:22:32.521]     mode.start(runner_args)
W0103 23:22:32.521]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0103 23:22:32.521]     check_env(env, self.command, *args)
W0103 23:22:32.521]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0103 23:22:32.522]     subprocess.check_call(cmd, env=env)
W0103 23:22:32.522]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0103 23:22:32.522]     raise CalledProcessError(retcode, cmd)
W0103 23:22:32.523] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--check-leaked-resources', '--check-version-skew=false', '--extract=ci/k8s-stable1', '--extract=ci/latest', '--gcp-node-image=gci', '--gcp-project-type=gpu-project', '--gcp-zone=us-west1-b', '--test_args=--kubectl-path=../../../../kubernetes_skew/cluster/kubectl.sh --minStartupPods=8 --ginkgo.skip=\\[.+\\]|Initializers|Dashboard', '--timeout=150m', '--upgrade_args=--ginkgo.focus=\\[Feature:GPUClusterDowngrade\\] --upgrade-target=ci/k8s-stable1 --upgrade-image=gci')' returned non-zero exit status 1
E0103 23:22:32.530] Command failed
I0103 23:22:32.530] process 269 exited with code 1 after 20.4m
E0103 23:22:32.531] FAIL: ci-kubernetes-e2e-gce-gpu-master-stable1-cluster-downgrade
I0103 23:22:32.531] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0103 23:22:33.153] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0103 23:22:33.216] process 10337 exited with code 0 after 0.0m
I0103 23:22:33.217] Call:  gcloud config get-value account
I0103 23:22:33.594] process 10350 exited with code 0 after 0.0m
I0103 23:22:33.595] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0103 23:22:33.596] Upload result and artifacts...
I0103 23:22:33.597] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-gpu-master-stable1-cluster-downgrade/1213233846846230531
I0103 23:22:33.597] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-gpu-master-stable1-cluster-downgrade/1213233846846230531/artifacts
W0103 23:22:34.680] CommandException: One or more URLs matched no objects.
E0103 23:22:34.828] Command failed
I0103 23:22:34.828] process 10363 exited with code 1 after 0.0m
W0103 23:22:34.828] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-gpu-master-stable1-cluster-downgrade/1213233846846230531/artifacts not exist yet
I0103 23:22:34.828] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-gpu-master-stable1-cluster-downgrade/1213233846846230531/artifacts
I0103 23:22:37.731] process 10508 exited with code 0 after 0.0m
I0103 23:22:37.732] Call:  git rev-parse HEAD
W0103 23:22:37.738] fatal: not a git repository (or any of the parent directories): .git
E0103 23:22:37.739] Command failed
I0103 23:22:37.739] process 11155 exited with code 128 after 0.0m
I0103 23:22:37.739] Call:  git rev-parse HEAD
I0103 23:22:37.745] process 11156 exited with code 0 after 0.0m
I0103 23:22:37.745] Call:  gsutil stat gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-gpu-master-stable1-cluster-downgrade/jobResultsCache.json
I0103 23:22:38.983] process 11157 exited with code 0 after 0.0m
I0103 23:22:38.985] Call:  gsutil -q cat 'gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-gpu-master-stable1-cluster-downgrade/jobResultsCache.json#1578050423896250'
... skipping 8 lines ...
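The output above is truncated (note the several "skipping ... lines" gaps). The complete log and artifacts for this run live in the GCS directory reported in the upload step; a small gsutil sketch for fetching them follows. The build-log.txt filename is assumed from the usual Prow upload layout and is not shown in the truncated output.

  # Sketch: pull the full artifacts and (assumed) raw build log for this run.
  RUN=gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-gpu-master-stable1-cluster-downgrade/1213233846846230531
  gsutil ls "${RUN}/artifacts"
  gsutil cp "${RUN}/build-log.txt" .   # path assumed from standard Prow layout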