Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-10-11 12:48
Elapsed: 10m41s
Revision: release-0.3

No Test Failures!


Error lines from build-log.txt

... skipping 456 lines ...
    ubuntu-1804: changed: [default]
    ubuntu-1804:
    ubuntu-1804: TASK [kubernetes : Remove CNI tarball] *****************************************
    ubuntu-1804: changed: [default]
    ubuntu-1804:
    ubuntu-1804: TASK [kubernetes : Download Kubernetes binaries] *******************************
    ubuntu-1804: failed: [default] (item=kubeadm) => {"ansible_loop_var": "item", "changed": false, "dest": "/usr/bin/kubeadm", "elapsed": 0, "item": "kubeadm", "msg": "Request failed", "response": "HTTP Error 404: Not Found", "status_code": 404, "url": "https://storage.googleapis.com/kubernetes-release-dev/ci/v1.23.0-alpha.3.256+1f2813368eb0eb/bin/linux/amd64/kubeadm"}
    ubuntu-1804: failed: [default] (item=kubectl) => {"ansible_loop_var": "item", "changed": false, "dest": "/usr/bin/kubectl", "elapsed": 0, "item": "kubectl", "msg": "Request failed", "response": "HTTP Error 404: Not Found", "status_code": 404, "url": "https://storage.googleapis.com/kubernetes-release-dev/ci/v1.23.0-alpha.3.256+1f2813368eb0eb/bin/linux/amd64/kubectl"}
    ubuntu-1804: failed: [default] (item=kubelet) => {"ansible_loop_var": "item", "changed": false, "dest": "/usr/bin/kubelet", "elapsed": 0, "item": "kubelet", "msg": "Request failed", "response": "HTTP Error 404: Not Found", "status_code": 404, "url": "https://storage.googleapis.com/kubernetes-release-dev/ci/v1.23.0-alpha.3.256+1f2813368eb0eb/bin/linux/amd64/kubelet"}
    ubuntu-1804:
    ubuntu-1804: PLAY RECAP *********************************************************************
    ubuntu-1804: default                    : ok=48   changed=39   unreachable=0    failed=1    skipped=52   rescued=0    ignored=0
    ubuntu-1804:
==> ubuntu-1804: Provisioning step had errors: Running the cleanup provisioner, if present...
==> ubuntu-1804: Deleting instance...
    ubuntu-1804: Instance has been deleted!
==> ubuntu-1804: Deleting disk...
    ubuntu-1804: Disk has been deleted!
Build 'ubuntu-1804' errored after 5 minutes 33 seconds: Error executing Ansible: Non-zero exit status: exit status 2

==> Wait completed after 5 minutes 33 seconds

==> Some builds didn't complete successfully and had errors:
--> ubuntu-1804: Error executing Ansible: Non-zero exit status: exit status 2

==> Builds finished but no artifacts were created.
make: *** [Makefile:350: build-gce-ubuntu-1804] Error 1
+ exit-handler
+ unset KUBECONFIG
+ dump-logs
+ echo '=== versions ==='
=== versions ===
++ kind version
... skipping 5 lines ...
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ true
+ echo 'deployed cluster:'
deployed cluster:
+ kubectl --kubeconfig=/tmp/kubeconfig version
error: stat /tmp/kubeconfig: no such file or directory
+ true
+ echo ''

+ kubectl --context=kind-clusterapi get clusters,gcpclusters,machines,gcpmachines,kubeadmconfigs,machinedeployments,gcpmachinetemplates,kubeadmconfigtemplates,machinesets,kubeadmcontrolplanes --all-namespaces -o yaml
Error in configuration: context was not found for specified context: kind-clusterapi
+ true
+ echo 'images in docker'
+ docker images
+ echo 'images from bootstrap using containerd CLI'
+ docker exec clusterapi-control-plane ctr -n k8s.io images list
Error: No such container: clusterapi-control-plane
+ true
+ echo 'images in bootstrap cluster using kubectl CLI'
+ kubectl get pods --all-namespaces -o json
+ jq --raw-output '.items[].spec.containers[].image'
+ sort
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ true
+ echo 'images in deployed cluster using kubectl CLI'
+ kubectl --kubeconfig=/tmp/kubeconfig get pods --all-namespaces -o json
+ jq --raw-output '.items[].spec.containers[].image'
+ sort
error: stat /tmp/kubeconfig: no such file or directory
+ true
+ kubectl cluster-info dump
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ true
+ echo '=== gcloud compute instances list ==='
+ gcloud compute instances list --project k8s-gce-soak-1-5
Listed 0 items.
+ echo '=== cluster-info dump ==='
+ kubectl --kubeconfig=/tmp/kubeconfig cluster-info dump
error: stat /tmp/kubeconfig: no such file or directory
+ true
+ kind export logs --name=clusterapi /logs/artifacts/logs
ERROR: unknown cluster "clusterapi"
+ true
++ gcloud compute instances list '--filter=zone~'\''us-east4-.*'\''' --project k8s-gce-soak-1-5 '--format=value(name)'
WARNING: The following filter keys were not present in any resource : zone
+ gcloud logging read --order=asc '--format=table(timestamp,jsonPayload.resource.name,jsonPayload.event_subtype)' --project k8s-gce-soak-1-5 'timestamp >= "2021-10-11T12:48:52Z"'
+ cleanup
+ [[ '' = true ]]
++ go env GOPATH
+ cd /home/prow/go/src/k8s.io/kubernetes
+ rm -f _output/bin/e2e.test
+ gcloud compute forwarding-rules delete --project k8s-gce-soak-1-5 --global test1-apiserver --quiet
ERROR: (gcloud.compute.forwarding-rules.delete) Could not fetch resource:
 - The resource 'projects/k8s-gce-soak-1-5/global/forwardingRules/test1-apiserver' was not found

+ true
+ gcloud compute target-tcp-proxies delete --project k8s-gce-soak-1-5 test1-apiserver --quiet
ERROR: (gcloud.compute.target-tcp-proxies.delete) Some requests did not succeed:
 - The resource 'projects/k8s-gce-soak-1-5/global/targetTcpProxies/test1-apiserver' was not found

+ true
+ gcloud compute backend-services delete --project k8s-gce-soak-1-5 --global test1-apiserver --quiet
ERROR: (gcloud.compute.backend-services.delete) Some requests did not succeed:
 - The resource 'projects/k8s-gce-soak-1-5/global/backendServices/test1-apiserver' was not found

+ true
+ gcloud compute health-checks delete --project k8s-gce-soak-1-5 test1-apiserver --quiet
ERROR: (gcloud.compute.health-checks.delete) Could not fetch resource:
 - The resource 'projects/k8s-gce-soak-1-5/global/healthChecks/test1-apiserver' was not found

+ true
+ gcloud compute instances list --project k8s-gce-soak-1-5
+ grep test1
+ awk '{print "gcloud compute instances delete --project k8s-gce-soak-1-5 --quiet " $1 " --zone " $2 "\n"}'
... skipping 54 lines ...