Result: FAILURE
Tests: 1 failed / 6 succeeded
Started: 2020-01-14 16:18
Elapsed: 19m1s
Revision:
Builder: gke-prow-default-pool-cf4891d4-r4zq
pod: 5ebb15cd-36e9-11ea-9f20-3687633bf296
resultstore: https://source.cloud.google.com/results/invocations/36b98d5d-37a5-4842-9c14-d32297131467/targets/test
infra-commit: 55d428df3
job-version: v1.17.1-beta.0.52+d224476cd0730b
revision: v1.17.1-beta.0.52+d224476cd0730b

Test Failures


Up 7m44s

error during ./hack/e2e-internal/e2e-up.sh: exit status 2
				from junit_runner.xml
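The error excerpt further down is produced by filtering the raw build-log.txt for klog warning (`W`) and error (`E`) lines. A minimal sketch of that filter, using a few sample lines copied from this log (the file name `build-log-sample.txt` is made up for illustration):

```shell
# Sample klog-style lines taken from the log on this page
cat > build-log-sample.txt <<'EOF'
I0114 16:18:19.909] process 51 exited with code 0 after 0.0m
W0114 16:18:19.918] fatal: not a git repository (or any of the parent directories): .git
E0114 16:37:10.983] Command failed
EOF

# Keep only warning (W) and error (E) lines; klog prefixes are a
# severity letter followed by a 4-digit month/day, e.g. "W0114".
grep -E '^[WE][0-9]{4} ' build-log-sample.txt
```

The same filter applied to the full build-log.txt (available in the job's GCS artifacts directory named in the log below) reproduces the "Error lines" excerpt.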




Error lines from build-log.txt

... skipping 15 lines ...
I0114 16:18:19.909] process 51 exited with code 0 after 0.0m
I0114 16:18:19.910] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0114 16:18:19.910] Root: /workspace
I0114 16:18:19.910] cd to /workspace
I0114 16:18:19.910] Configure environment...
I0114 16:18:19.911] Call:  git show -s --format=format:%ct HEAD
W0114 16:18:19.918] fatal: not a git repository (or any of the parent directories): .git
I0114 16:18:19.919] process 64 exited with code 128 after 0.0m
W0114 16:18:19.919] Unable to print commit date for HEAD
I0114 16:18:19.919] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0114 16:18:20.664] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0114 16:18:21.088] process 65 exited with code 0 after 0.0m
I0114 16:18:21.089] Call:  gcloud config get-value account
... skipping 380 lines ...
W0114 16:22:50.393] Trying to find master named 'bootstrap-e2e-master'
W0114 16:22:50.393] Looking for address 'bootstrap-e2e-master-ip'
W0114 16:22:51.502] Using master: bootstrap-e2e-master (external IP: 35.203.134.59; internal IP: (not set))
I0114 16:22:51.603] Waiting up to 300 seconds for cluster initialization.
I0114 16:22:51.603] 
I0114 16:22:51.603]   This will continually check to see if the API for kubernetes is reachable.
I0114 16:22:51.603]   This may time out if there was some uncaught error during start up.
I0114 16:22:51.603] 
I0114 16:27:58.698] ..................................................................................Checking for custom logdump instances, if any
I0114 16:27:58.704] Sourcing kube-util.sh
I0114 16:27:58.768] Detecting project
I0114 16:27:58.768] Project: k8s-gce-cvm-1-5-1-6-ctl-skew
I0114 16:27:58.769] Network Project: k8s-gce-cvm-1-5-1-6-ctl-skew
I0114 16:27:58.769] Zone: us-west1-b
I0114 16:27:58.769] Dumping logs from master locally to '/workspace/_artifacts'
W0114 16:27:58.870] Cluster failed to initialize within 300 seconds.
W0114 16:27:58.870] Last output from querying API server follows:
W0114 16:27:58.871] -----------------------------------------------------
W0114 16:27:58.871]   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
W0114 16:27:58.871]                                  Dload  Upload   Total   Spent    Left  Speed
W0114 16:27:58.872] 
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:02 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:03 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0
W0114 16:27:58.872] curl: (28) Operation timed out after 5000 milliseconds with 0 out of 0 bytes received
... skipping 11 lines ...
W0114 16:28:39.958] scp: /var/log/glbc.log*: No such file or directory
W0114 16:28:39.959] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0114 16:28:39.959] scp: /var/log/kube-addon-manager.log*: No such file or directory
W0114 16:28:39.959] scp: /var/log/fluentd.log*: No such file or directory
W0114 16:28:39.959] scp: /var/log/kubelet.cov*: No such file or directory
W0114 16:28:39.959] scp: /var/log/startupscript.log*: No such file or directory
W0114 16:28:39.963] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0114 16:28:40.075] Dumping logs from nodes locally to '/workspace/_artifacts'
I0114 16:28:40.076] Detecting nodes in the cluster
W0114 16:28:50.244] ERROR: (gcloud.compute.ssh) Could not fetch resource:
W0114 16:28:50.244]  - Internal error. Please try again or contact Google Support. (Code: '-7175473716465555063')
W0114 16:28:50.244] 
I0114 16:29:25.571] Changing logfiles to be world-readable for download
I0114 16:29:26.438] Changing logfiles to be world-readable for download
I0114 16:29:29.809] Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-hrtg
I0114 16:29:30.596] Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-gfdc
W0114 16:29:31.247] 
W0114 16:29:31.248] Specify --start=47207 in the next get-serial-port-output invocation to get only the new output starting from here.
W0114 16:29:31.965] 
W0114 16:29:31.966] Specify --start=47201 in the next get-serial-port-output invocation to get only the new output starting from here.
W0114 16:29:33.322] scp: /var/log/fluentd.log*: No such file or directory
W0114 16:29:33.323] scp: /var/log/node-problem-detector.log*: No such file or directory
W0114 16:29:33.323] scp: /var/log/kubelet.cov*: No such file or directory
W0114 16:29:33.324] scp: /var/log/startupscript.log*: No such file or directory
W0114 16:29:33.327] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0114 16:29:33.797] Changing logfiles to be world-readable for download
W0114 16:29:34.006] scp: /var/log/fluentd.log*: No such file or directory
W0114 16:29:34.007] scp: /var/log/node-problem-detector.log*: No such file or directory
W0114 16:29:34.007] scp: /var/log/kubelet.cov*: No such file or directory
W0114 16:29:34.008] scp: /var/log/startupscript.log*: No such file or directory
W0114 16:29:34.011] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0114 16:29:37.525] Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-6n5z
W0114 16:29:38.888] 
W0114 16:29:38.888] Specify --start=47196 in the next get-serial-port-output invocation to get only the new output starting from here.
W0114 16:29:40.745] scp: /var/log/fluentd.log*: No such file or directory
W0114 16:29:40.746] scp: /var/log/node-problem-detector.log*: No such file or directory
W0114 16:29:40.746] scp: /var/log/kubelet.cov*: No such file or directory
W0114 16:29:40.746] scp: /var/log/startupscript.log*: No such file or directory
W0114 16:29:40.749] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0114 16:29:44.879] INSTANCE_GROUPS=bootstrap-e2e-minion-group
W0114 16:29:44.879] NODE_NAMES=bootstrap-e2e-minion-group-6n5z bootstrap-e2e-minion-group-gfdc bootstrap-e2e-minion-group-hrtg
I0114 16:29:46.127] Failures for bootstrap-e2e-minion-group (if any):
W0114 16:29:47.692] 2020/01/14 16:29:47 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 1m48.99762421s
W0114 16:29:47.693] 2020/01/14 16:29:47 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
W0114 16:29:47.839] Project: k8s-gce-cvm-1-5-1-6-ctl-skew
... skipping 12 lines ...
W0114 16:29:55.351] NODE_NAMES=bootstrap-e2e-minion-group-6n5z bootstrap-e2e-minion-group-gfdc bootstrap-e2e-minion-group-hrtg
W0114 16:29:58.584] Deleting Managed Instance Group...
W0114 16:32:04.782] ............................Deleted [https://www.googleapis.com/compute/v1/projects/k8s-gce-cvm-1-5-1-6-ctl-skew/zones/us-west1-b/instanceGroupManagers/bootstrap-e2e-minion-group].
W0114 16:32:04.783] done.
W0114 16:32:12.703] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-gce-cvm-1-5-1-6-ctl-skew/global/instanceTemplates/bootstrap-e2e-minion-template].
W0114 16:32:13.912] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-gce-cvm-1-5-1-6-ctl-skew/global/instanceTemplates/bootstrap-e2e-windows-node-template].
W0114 16:33:17.791] Failed to execute 'curl -s --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt --cert /etc/srv/kubernetes/pki/etcd-apiserver-client.crt --key /etc/srv/kubernetes/pki/etcd-apiserver-client.key https://127.0.0.1:2379/v2/members/$(curl -s --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt --cert /etc/srv/kubernetes/pki/etcd-apiserver-client.crt --key /etc/srv/kubernetes/pki/etcd-apiserver-client.key https://127.0.0.1:2379/v2/members -XGET | sed 's/{\"id/\n/g' | grep bootstrap-e2e-master\" | cut -f 3 -d \") -XDELETE -L 2>/dev/null' on bootstrap-e2e-master despite 5 attempts
W0114 16:33:17.792] Last attempt failed with: 
I0114 16:33:17.892] Removing etcd replica, name: bootstrap-e2e-master, port: 2379, result: 1
I0114 16:33:19.564] Successfully executed 'curl -s  http://127.0.0.1:4002/v2/members/$(curl -s  http://127.0.0.1:4002/v2/members -XGET | sed 's/{\"id/\n/g' | grep bootstrap-e2e-master\" | cut -f 3 -d \") -XDELETE -L 2>/dev/null' on bootstrap-e2e-master
I0114 16:33:19.565] Removing etcd replica, name: bootstrap-e2e-master, port: 4002, result: 0
W0114 16:33:24.986] Updated [https://www.googleapis.com/compute/v1/projects/k8s-gce-cvm-1-5-1-6-ctl-skew/zones/us-west1-b/instances/bootstrap-e2e-master].
W0114 16:35:46.990] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-gce-cvm-1-5-1-6-ctl-skew/zones/us-west1-b/instances/bootstrap-e2e-master].
W0114 16:35:57.232] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-gce-cvm-1-5-1-6-ctl-skew/global/firewalls/bootstrap-e2e-master-https].
... skipping 19 lines ...
I0114 16:37:10.024] Cleared config for k8s-gce-cvm-1-5-1-6-ctl-skew_bootstrap-e2e from /workspace/.kube/config
I0114 16:37:10.025] Done
W0114 16:37:10.125] W0114 16:37:10.020059   10228 loader.go:223] Config not found: /workspace/.kube/config
W0114 16:37:10.126] W0114 16:37:10.020248   10228 loader.go:223] Config not found: /workspace/.kube/config
W0114 16:37:10.126] 2020/01/14 16:37:10 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 7m22.335157623s
W0114 16:37:10.126] 2020/01/14 16:37:10 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0114 16:37:10.966] 2020/01/14 16:37:10 main.go:316: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 2
W0114 16:37:10.970] Traceback (most recent call last):
W0114 16:37:10.970]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0114 16:37:10.972]     main(parse_args())
W0114 16:37:10.972]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0114 16:37:10.972]     mode.start(runner_args)
W0114 16:37:10.972]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0114 16:37:10.972]     check_env(env, self.command, *args)
W0114 16:37:10.973]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0114 16:37:10.973]     subprocess.check_call(cmd, env=env)
W0114 16:37:10.973]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0114 16:37:10.973]     raise CalledProcessError(retcode, cmd)
W0114 16:37:10.974] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--check-leaked-resources', '--check-version-skew=false', '--extract=ci/k8s-stable1', '--extract=ci/k8s-beta', '--gcp-node-image=gci', '--gcp-zone=us-west1-b', '--ginkgo-parallel', '--skew', '--test_args=--ginkgo.skip=\\[Slow\\]|\\[Serial\\]|\\[Disruptive\\]|\\[Flaky\\]|\\[Feature:.+\\] --kubectl-path=../../../../kubernetes_skew/cluster/kubectl.sh --minStartupPods=8', '--timeout=120m', '--upgrade_args=--ginkgo.focus=\\[Feature:ClusterDowngrade\\] --upgrade-target=ci/k8s-stable1 --upgrade-image=gci')' returned non-zero exit status 1
E0114 16:37:10.983] Command failed
I0114 16:37:10.983] process 272 exited with code 1 after 18.8m
E0114 16:37:10.983] FAIL: ci-kubernetes-e2e-gce-beta-stable1-downgrade-cluster-parallel
I0114 16:37:10.983] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0114 16:37:11.542] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0114 16:37:11.597] process 10240 exited with code 0 after 0.0m
I0114 16:37:11.597] Call:  gcloud config get-value account
I0114 16:37:11.943] process 10253 exited with code 0 after 0.0m
I0114 16:37:11.943] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0114 16:37:11.944] Upload result and artifacts...
I0114 16:37:11.944] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-beta-stable1-downgrade-cluster-parallel/1217118675115446274
I0114 16:37:11.944] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-beta-stable1-downgrade-cluster-parallel/1217118675115446274/artifacts
W0114 16:37:12.895] CommandException: One or more URLs matched no objects.
E0114 16:37:13.021] Command failed
I0114 16:37:13.021] process 10266 exited with code 1 after 0.0m
W0114 16:37:13.021] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-beta-stable1-downgrade-cluster-parallel/1217118675115446274/artifacts not exist yet
I0114 16:37:13.022] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-beta-stable1-downgrade-cluster-parallel/1217118675115446274/artifacts
I0114 16:37:15.408] process 10411 exited with code 0 after 0.0m
I0114 16:37:15.409] Call:  git rev-parse HEAD
W0114 16:37:15.413] fatal: not a git repository (or any of the parent directories): .git
E0114 16:37:15.413] Command failed
I0114 16:37:15.413] process 11058 exited with code 128 after 0.0m
I0114 16:37:15.413] Call:  git rev-parse HEAD
I0114 16:37:15.420] process 11059 exited with code 0 after 0.0m
I0114 16:37:15.420] Call:  gsutil stat gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-beta-stable1-downgrade-cluster-parallel/jobResultsCache.json
I0114 16:37:16.454] process 11060 exited with code 0 after 0.0m
I0114 16:37:16.455] Call:  gsutil -q cat 'gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-beta-stable1-downgrade-cluster-parallel/jobResultsCache.json#1579012557530803'
... skipping 8 lines ...
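The Python traceback above shows how the scenario wrapper (scenarios/kubernetes_e2e.py) turns kubetest's non-zero exit into the `Command failed` / `FAIL:` lines: it runs the child through `subprocess.check_call`, which raises `CalledProcessError` on any non-zero status. A simplified sketch of that pattern (the `check_env` name comes from the traceback; the body here is a reconstruction, not the actual source):

```python
import os
import subprocess

def check_env(env, *cmd):
    """Run cmd with extra environment variables; raise on failure.

    Simplified sketch of the check_env helper visible in the traceback
    from scenarios/kubernetes_e2e.py, not the real implementation.
    """
    merged = dict(os.environ)
    merged.update(env)
    # check_call raises CalledProcessError when the child exits non-zero,
    # which is what propagated kubetest's exit status 1 up to the runner.
    subprocess.check_call(list(cmd), env=merged)

# /bin/false always exits 1, standing in for the failing kubetest run
try:
    check_env({}, "false")
except subprocess.CalledProcessError as e:
    print("exit status", e.returncode)
```

The runner catches nothing above this point, so the exception reaches the top level and is printed verbatim, producing the traceback and the final `FAIL:` line for the job.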