PR LochanRn: Availability zones support for managed clusters
Result FAILURE
Tests 0 failed / 0 succeeded
Started 2021-11-20 22:06
Elapsed 2h15m
Revision 3848ded6651623e8597c4fdb24881ec349577ccb
Refs 1564

No Test Failures!


1 Skipped Test

Error lines from build-log.txt

... skipping 434 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:579

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Sat, 20 Nov 2021 22:37:40 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-yeoedx" for hosting the cluster
Nov 20 22:37:40.539: INFO: starting to create namespace for hosting the "capz-e2e-yeoedx" test spec
2021/11/20 22:37:40 failed trying to get namespace (capz-e2e-yeoedx):namespaces "capz-e2e-yeoedx" not found
INFO: Creating namespace capz-e2e-yeoedx
INFO: Creating event watcher for namespace "capz-e2e-yeoedx"
Nov 20 22:37:40.576: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-yeoedx-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-yeoedx-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.4, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-jobmwibh1o9hsl to be complete
Nov 20 22:47:13.086: INFO: waiting for job default/curl-to-elb-jobmwibh1o9hsl to be complete
Nov 20 22:47:23.154: INFO: job default/curl-to-elb-jobmwibh1o9hsl is complete, took 10.06773196s
STEP: connecting directly to the external LB service
Nov 20 22:47:23.154: INFO: starting attempts to connect directly to the external LB service
2021/11/20 22:47:23 [DEBUG] GET http://52.184.249.80
2021/11/20 22:47:53 [ERR] GET http://52.184.249.80 request failed: Get "http://52.184.249.80": dial tcp 52.184.249.80:80: i/o timeout
2021/11/20 22:47:53 [DEBUG] GET http://52.184.249.80: retrying in 1s (4 left)
Nov 20 22:47:55.227: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 20 22:47:55.227: INFO: starting to delete external LB service webphfsjx-elb
Nov 20 22:47:55.288: INFO: starting to delete deployment webphfsjx
Nov 20 22:47:55.323: INFO: starting to delete job curl-to-elb-jobmwibh1o9hsl
... skipping 65 lines ...
STEP: Fetching activity logs took 607.808883ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-yeoedx" namespace
STEP: Deleting all clusters in the capz-e2e-yeoedx namespace
STEP: Deleting cluster capz-e2e-yeoedx-win-vmss
INFO: Waiting for the Cluster capz-e2e-yeoedx/capz-e2e-yeoedx-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-yeoedx-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-56g6n, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-yeoedx-win-vmss-control-plane-vvpp8, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-nxxzn, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-yeoedx-win-vmss-control-plane-vvpp8, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-jjnzf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kgt48, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-c29bt, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qmpmg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-gp649, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-b845r, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-yeoedx-win-vmss-control-plane-vvpp8, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-yeoedx-win-vmss-control-plane-vvpp8, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-yeoedx
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 24m28s on Ginkgo node 1 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows enabled VMSS cluster with dockershim
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:578
    with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:579
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2021-11-21T00:06:09Z"}
++ early_exit_handler
++ '[' -n 165 ']'
++ kill -TERM 165
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
================================================================================
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
{"component":"entrypoint","file":"prow/entrypoint/run.go:255","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2021-11-21T00:21:09Z"}
{"component":"entrypoint","error":"os: process already finished","file":"prow/entrypoint/run.go:257","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2021-11-21T00:21:09Z"}