Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2022-08-09 10:41
Elapsed: 16m11s
Revision:
Builder: d98c3259-17cf-11ed-ad9d-7278861c489e
infra-commit: e2d16d6fd
job-version: v1.22.13-rc.0.3+5dd5cf1575bb73
kubetest-version: v20220804-4fa19ea91a
revision: v1.22.13-rc.0.3+5dd5cf1575bb73

Test Failures


kubetest Up 7m14s

error during ./hack/e2e-internal/e2e-up.sh: exit status 2
(from junit_runner.xml)
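
Per the log below, the Up step fails because the cluster's API server never becomes reachable: the 300-second initialization wait expires while curl keeps getting "Connection refused" on the master's port 443 (external IP 34.168.201.130). A minimal sketch of the kind of readiness poll the log describes, useful for checking a master by hand; the unauthenticated /healthz path is an assumption for illustration and this is not the actual kube-up wait loop:

    #!/usr/bin/env bash
    # Hypothetical readiness poll against the e2e master reported in this run.
    # Assumes the apiserver answers /healthz without auth; kube-up's own
    # initialization check differs in detail.
    MASTER_IP=34.168.201.130        # external IP of test-a8a0b221b9-master (from the log)
    deadline=$((SECONDS + 300))     # same 300 s budget as the failed initialization wait
    until curl -sk --max-time 5 "https://${MASTER_IP}/healthz" >/dev/null; do
      if (( SECONDS >= deadline )); then
        echo "apiserver still unreachable after 300 s" >&2
        exit 1
      fi
      sleep 5
    done
    echo "apiserver reachable"

In this run the poll would have exited with the failure branch, which is why kubetest reports ./hack/e2e-internal/e2e-up.sh exiting with status 2 and tears the cluster back down.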




Error lines from build-log.txt

... skipping 15 lines ...
I0809 10:41:46.208] process 58 exited with code 0 after 0.0m
I0809 10:41:46.208] Will upload results to gs://kubernetes-jenkins/logs using prow-build@k8s-infra-prow-build.iam.gserviceaccount.com
I0809 10:41:46.208] Root: /workspace
I0809 10:41:46.208] cd to /workspace
I0809 10:41:46.208] Configure environment...
I0809 10:41:46.209] Call:  git show -s --format=format:%ct HEAD
W0809 10:41:46.212] fatal: not a git repository (or any of the parent directories): .git
I0809 10:41:46.212] process 72 exited with code 128 after 0.0m
W0809 10:41:46.212] Unable to print commit date for HEAD
I0809 10:41:46.212] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0809 10:41:47.043] Activated service account credentials for: [prow-build@k8s-infra-prow-build.iam.gserviceaccount.com]
I0809 10:41:47.230] process 73 exited with code 0 after 0.0m
I0809 10:41:47.231] Call:  gcloud config get-value account
... skipping 359 lines ...
W0809 10:45:18.025] Trying to find master named 'test-a8a0b221b9-master'
W0809 10:45:18.025] Looking for address 'test-a8a0b221b9-master-ip'
W0809 10:45:19.232] Using master: test-a8a0b221b9-master (external IP: 34.168.201.130; internal IP: (not set))
I0809 10:45:19.333] Waiting up to 300 seconds for cluster initialization.
I0809 10:45:19.333] 
I0809 10:45:19.333]   This will continually check to see if the API for kubernetes is reachable.
I0809 10:45:19.333]   This may time out if there was some uncaught error during start up.
I0809 10:45:19.333] 
I0809 10:50:21.569] ............................................................................................................................................Checking for custom logdump instances, if any
I0809 10:50:21.574] ----------------------------------------------------------------------------------------------------
I0809 10:50:21.575] k/k version of the log-dump.sh script is deprecated!
I0809 10:50:21.575] Please migrate your test job to use test-infra's repo version of log-dump.sh!
I0809 10:50:21.576] Migration steps can be found in the readme file.
I0809 10:50:21.576] ----------------------------------------------------------------------------------------------------
I0809 10:50:21.576] Sourcing kube-util.sh
I0809 10:50:21.639] Detecting project
I0809 10:50:21.640] Project: k8s-infra-e2e-boskos-101
I0809 10:50:21.640] Network Project: k8s-infra-e2e-boskos-101
I0809 10:50:21.640] Zone: us-west1-b
I0809 10:50:21.649] Dumping logs from master locally to '/workspace/_artifacts'
W0809 10:50:21.750] Cluster failed to initialize within 300 seconds.
W0809 10:50:21.750] Last output from querying API server follows:
W0809 10:50:21.750] -----------------------------------------------------
W0809 10:50:21.750] * Expire in 0 ms for 6 (transfer 0x5578d45190f0)
W0809 10:50:21.750] * Expire in 5000 ms for 8 (transfer 0x5578d45190f0)
W0809 10:50:21.750] *   Trying 34.168.201.130...
W0809 10:50:21.750] * TCP_NODELAY set
W0809 10:50:21.751] * Expire in 200 ms for 4 (transfer 0x5578d45190f0)
W0809 10:50:21.751] * connect to 34.168.201.130 port 443 failed: Connection refused
W0809 10:50:21.751] * Failed to connect to 34.168.201.130 port 443: Connection refused
W0809 10:50:21.751] * Closing connection 0
W0809 10:50:21.751] curl: (7) Failed to connect to 34.168.201.130 port 443: Connection refused
W0809 10:50:21.751] -----------------------------------------------------
W0809 10:50:21.752] 2022/08/09 10:50:21 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 7m14.063525176s
W0809 10:50:21.752] 2022/08/09 10:50:21 e2e.go:574: Dumping logs locally to: /workspace/_artifacts
W0809 10:50:21.752] 2022/08/09 10:50:21 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts
W0809 10:50:21.752] Trying to find master named 'test-a8a0b221b9-master'
W0809 10:50:21.752] Looking for address 'test-a8a0b221b9-master-ip'
... skipping 5 lines ...
W0809 10:51:07.336] scp: /var/log/glbc.log*: No such file or directory
W0809 10:51:07.336] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0809 10:51:07.337] scp: /var/log/kube-addon-manager.log*: No such file or directory
W0809 10:51:07.415] scp: /var/log/fluentd.log*: No such file or directory
W0809 10:51:07.415] scp: /var/log/kubelet.cov*: No such file or directory
W0809 10:51:07.415] scp: /var/log/startupscript.log*: No such file or directory
W0809 10:51:07.419] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0809 10:51:07.658] Dumping logs from nodes locally to '/workspace/_artifacts'
I0809 10:51:07.658] Detecting nodes in the cluster
W0809 10:52:09.195] 
W0809 10:52:09.196] Recommendation: To check for possible causes of SSH connectivity issues and get
W0809 10:52:09.196] recommendations, rerun the ssh command with the --troubleshoot option.
W0809 10:52:09.196] 
W0809 10:52:09.196] gcloud compute ssh test-a8a0b221b9-minion-group-g2fq --project=k8s-infra-e2e-boskos-101 --zone=us-west1-b --troubleshoot
W0809 10:52:09.197] 
W0809 10:52:09.197] Or, to investigate an IAP tunneling issue:
W0809 10:52:09.197] 
W0809 10:52:09.197] gcloud compute ssh test-a8a0b221b9-minion-group-g2fq --project=k8s-infra-e2e-boskos-101 --zone=us-west1-b --troubleshoot --tunnel-through-iap
W0809 10:52:09.197] 
W0809 10:52:09.198] ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
I0809 10:52:23.621] Changing logfiles to be world-readable for download
W0809 10:52:25.577] 
W0809 10:52:25.578] Recommendation: To check for possible causes of SSH connectivity issues and get
W0809 10:52:25.578] recommendations, rerun the ssh command with the --troubleshoot option.
W0809 10:52:25.578] 
W0809 10:52:25.578] gcloud compute ssh test-a8a0b221b9-minion-group-g2fq --project=k8s-infra-e2e-boskos-101 --zone=us-west1-b --troubleshoot
W0809 10:52:25.579] 
W0809 10:52:25.579] Or, to investigate an IAP tunneling issue:
W0809 10:52:25.589] 
W0809 10:52:25.589] gcloud compute ssh test-a8a0b221b9-minion-group-g2fq --project=k8s-infra-e2e-boskos-101 --zone=us-west1-b --troubleshoot --tunnel-through-iap
W0809 10:52:25.589] 
W0809 10:52:25.590] ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
I0809 10:52:26.305] Changing logfiles to be world-readable for download
I0809 10:52:29.629] Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from test-a8a0b221b9-minion-group-g02p
W0809 10:52:30.988] 
W0809 10:52:30.988] Specify --start=91900 in the next get-serial-port-output invocation to get only the new output starting from here.
I0809 10:52:31.914] Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from test-a8a0b221b9-minion-group-tdk1
W0809 10:52:34.156] 
W0809 10:52:34.156] Specify --start=91565 in the next get-serial-port-output invocation to get only the new output starting from here.
W0809 10:52:34.789] scp: /var/log/kube-proxy.log*: No such file or directory
W0809 10:52:34.789] scp: /var/log/containers/konnectivity-agent-*.log*: No such file or directory
W0809 10:52:34.790] scp: /var/log/fluentd.log*: No such file or directory
W0809 10:52:34.790] scp: /var/log/node-problem-detector.log*: No such file or directory
W0809 10:52:34.790] scp: /var/log/kubelet.cov*: No such file or directory
W0809 10:52:34.790] scp: /var/log/startupscript.log*: No such file or directory
W0809 10:52:34.793] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0809 10:52:37.648] scp: /var/log/kube-proxy.log*: No such file or directory
W0809 10:52:37.649] scp: /var/log/containers/konnectivity-agent-*.log*: No such file or directory
W0809 10:52:37.649] scp: /var/log/fluentd.log*: No such file or directory
W0809 10:52:37.649] scp: /var/log/node-problem-detector.log*: No such file or directory
W0809 10:52:37.649] scp: /var/log/kubelet.cov*: No such file or directory
W0809 10:52:37.649] scp: /var/log/startupscript.log*: No such file or directory
W0809 10:52:37.652] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0809 10:52:46.065] Changing logfiles to be world-readable for download
I0809 10:52:49.938] Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from test-a8a0b221b9-minion-group-g2fq
W0809 10:52:50.946] 
W0809 10:52:50.947] Specify --start=91434 in the next get-serial-port-output invocation to get only the new output starting from here.
W0809 10:52:52.824] scp: /var/log/kube-proxy.log*: No such file or directory
W0809 10:52:52.824] scp: /var/log/containers/konnectivity-agent-*.log*: No such file or directory
W0809 10:52:52.825] scp: /var/log/fluentd.log*: No such file or directory
W0809 10:52:52.825] scp: /var/log/node-problem-detector.log*: No such file or directory
W0809 10:52:52.825] scp: /var/log/kubelet.cov*: No such file or directory
W0809 10:52:52.825] scp: /var/log/startupscript.log*: No such file or directory
W0809 10:52:52.828] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0809 10:52:57.323] INSTANCE_GROUPS=test-a8a0b221b9-minion-group
W0809 10:52:57.324] NODE_NAMES=test-a8a0b221b9-minion-group-g02p test-a8a0b221b9-minion-group-g2fq test-a8a0b221b9-minion-group-tdk1
I0809 10:52:58.628] Failures for test-a8a0b221b9-minion-group (if any):
W0809 10:52:59.947] 2022/08/09 10:52:59 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 2m38.3812509s
W0809 10:52:59.948] 2022/08/09 10:52:59 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
W0809 10:53:00.013] Project: k8s-infra-e2e-boskos-101
... skipping 14 lines ...
I0809 10:53:08.484] Bringing down cluster
W0809 10:53:34.296] Deleting Managed Instance Group...
W0809 10:53:34.297] ..Deleted [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-101/zones/us-west1-b/instanceGroupManagers/test-a8a0b221b9-minion-group].
W0809 10:53:34.300] done.
W0809 10:53:41.936] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-101/global/instanceTemplates/test-a8a0b221b9-minion-template].
W0809 10:53:42.763] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-101/global/instanceTemplates/test-a8a0b221b9-windows-node-template].
W0809 10:54:45.611] Failed to execute 'curl -s --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt --cert /etc/srv/kubernetes/pki/etcd-apiserver-client.crt --key /etc/srv/kubernetes/pki/etcd-apiserver-client.key https://127.0.0.1:2379/v2/members/$(curl -s --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt --cert /etc/srv/kubernetes/pki/etcd-apiserver-client.crt --key /etc/srv/kubernetes/pki/etcd-apiserver-client.key https://127.0.0.1:2379/v2/members -XGET | sed 's/{\"id/\n/g' | grep test-a8a0b221b9-master\" | cut -f 3 -d \") -XDELETE -L 2>/dev/null' on test-a8a0b221b9-master despite 5 attempts
W0809 10:54:45.611] Last attempt failed with: 
I0809 10:54:45.712] Removing etcd replica, name: test-a8a0b221b9-master, port: 2379, result: 1
W0809 10:55:45.484] Failed to execute 'curl -s  http://127.0.0.1:4002/v2/members/$(curl -s  http://127.0.0.1:4002/v2/members -XGET | sed 's/{\"id/\n/g' | grep test-a8a0b221b9-master\" | cut -f 3 -d \") -XDELETE -L 2>/dev/null' on test-a8a0b221b9-master despite 5 attempts
W0809 10:55:45.484] Last attempt failed with: 
I0809 10:55:45.584] Removing etcd replica, name: test-a8a0b221b9-master, port: 4002, result: 1
W0809 10:55:49.692] Updated [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-101/zones/us-west1-b/instances/test-a8a0b221b9-master].
W0809 10:56:05.226] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-101/zones/us-west1-b/instances/test-a8a0b221b9-master].
W0809 10:56:07.714] WARNING: The following filter keys were not present in any resource : name
W0809 10:56:16.227] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-101/global/firewalls/test-a8a0b221b9-master-https].
W0809 10:56:18.882] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-101/global/firewalls/test-a8a0b221b9-master-etcd].
... skipping 20 lines ...
I0809 10:57:42.398] Cleared config for k8s-infra-e2e-boskos-101_test-a8a0b221b9 from /workspace/.kube/config
I0809 10:57:42.398] Done
W0809 10:57:42.420] W0809 10:57:42.395877   11326 loader.go:221] Config not found: /workspace/.kube/config
W0809 10:57:42.420] W0809 10:57:42.396053   11326 loader.go:221] Config not found: /workspace/.kube/config
W0809 10:57:42.420] 2022/08/09 10:57:42 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 4m42.452046123s
W0809 10:57:42.420] 2022/08/09 10:57:42 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0809 10:57:42.421] 2022/08/09 10:57:42 main.go:331: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 2
W0809 10:57:42.421] Traceback (most recent call last):
W0809 10:57:42.421]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 723, in <module>
W0809 10:57:42.421]     main(parse_args())
W0809 10:57:42.421]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 569, in main
W0809 10:57:42.421]     mode.start(runner_args)
W0809 10:57:42.421]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 228, in start
W0809 10:57:42.422]     check_env(env, self.command, *args)
W0809 10:57:42.422]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0809 10:57:42.422]     subprocess.check_call(cmd, env=env)
W0809 10:57:42.422]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0809 10:57:42.422]     raise CalledProcessError(retcode, cmd)
W0809 10:57:42.423] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--provider=gce', '--cluster=test-a8a0b221b9', '--gcp-network=test-a8a0b221b9', '--check-leaked-resources', '--gcp-zone=us-west1-b', '--gcp-node-image=gci', '--extract=ci/latest-1.22', '--extract-ci-bucket=k8s-release-dev', '--timeout=180m', '--runtime-config=api/all=true', '--test_args=--ginkgo.focus=\\[Feature:(Audit|BlockVolume|PodPreset|ExpandCSIVolumes|ExpandInUseVolumes)\\]|Networking --ginkgo.skip=Networking-Performance|IPv6|Feature:(Volumes|SCTPConnectivity) --minStartupPods=8')' returned non-zero exit status 1
E0809 10:57:42.423] Command failed
I0809 10:57:42.423] process 273 exited with code 1 after 15.9m
E0809 10:57:42.423] FAIL: ci-kubernetes-e2e-gce-cos-k8sstable2-alphafeatures
I0809 10:57:42.423] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0809 10:57:43.184] Activated service account credentials for: [prow-build@k8s-infra-prow-build.iam.gserviceaccount.com]
I0809 10:57:43.309] process 11337 exited with code 0 after 0.0m
I0809 10:57:43.309] Call:  gcloud config get-value account
I0809 10:57:43.998] process 11351 exited with code 0 after 0.0m
I0809 10:57:43.999] Will upload results to gs://kubernetes-jenkins/logs using prow-build@k8s-infra-prow-build.iam.gserviceaccount.com
I0809 10:57:43.999] Upload result and artifacts...
I0809 10:57:43.999] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-cos-k8sstable2-alphafeatures/1556953850399690752
I0809 10:57:43.999] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-cos-k8sstable2-alphafeatures/1556953850399690752/artifacts
W0809 10:57:45.155] CommandException: One or more URLs matched no objects.
E0809 10:57:45.388] Command failed
I0809 10:57:45.389] process 11365 exited with code 1 after 0.0m
W0809 10:57:45.389] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-cos-k8sstable2-alphafeatures/1556953850399690752/artifacts not exist yet
I0809 10:57:45.389] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-cos-k8sstable2-alphafeatures/1556953850399690752/artifacts
I0809 10:57:50.589] process 11505 exited with code 0 after 0.1m
I0809 10:57:50.590] Call:  git rev-parse HEAD
W0809 10:57:50.593] fatal: not a git repository (or any of the parent directories): .git
E0809 10:57:50.593] Command failed
I0809 10:57:50.593] process 12133 exited with code 128 after 0.0m
I0809 10:57:50.594] Call:  git rev-parse HEAD
I0809 10:57:50.597] process 12134 exited with code 0 after 0.0m
I0809 10:57:50.597] Call:  gsutil stat gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-cos-k8sstable2-alphafeatures/jobResultsCache.json
I0809 10:57:51.928] process 12135 exited with code 0 after 0.0m
I0809 10:57:51.929] Call:  gsutil -q cat 'gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-cos-k8sstable2-alphafeatures/jobResultsCache.json#1660021071796578'
... skipping 8 lines ...