PR nader-ziada: don't use hard-coded value for manager namespace
Result FAILURE
Tests 1 failed / 4 succeeded
Started 2021-03-04 22:17
Elapsed 1h2m
Revision ec2cf91a342931e8853d6689d9bad9e0c672c873
Refs 1209

Test Failures


capz-e2e Workload cluster creation Creating a VMSS cluster with a single control plane node and an AzureMachinePool with 2 nodes 19m29s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sVMSS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\san\sAzureMachinePool\swith\s2\snodes$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:265
Timed out after 900.000s.
Expected
    <int>: 0
to equal
    <int>: 2
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api@v0.3.11-0.20210209200458-51a6d64d171c/test/framework/machinepool_helpers.go:85
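The failing assertion lives in cluster-api's machinepool_helpers.go: the framework polls the MachinePool until the observed ready node count equals the requested replica count, and here it saw 0 of 2 nodes for the full 900s. A minimal sketch of that style of Gomega polling check, meant to run inside a Ginkgo suite — the function and getter names below are illustrative, not the framework's actual API:

package e2e_sketch

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
)

// readyReplicas is a hypothetical stand-in for a client call that reads
// MachinePool.Status.ReadyReplicas from the management cluster.
func readyReplicas(ctx context.Context) int { return 0 }

// waitForMachinePoolNodes polls until the ready node count matches the
// desired replica count. With readyReplicas stuck at 0 and want == 2, this
// fails exactly like the run above:
// "Timed out after 900.000s. Expected <int>: 0 to equal <int>: 2".
func waitForMachinePoolNodes(ctx context.Context, want int) {
	Eventually(func() int {
		return readyReplicas(ctx)
	}, 15*time.Minute, 10*time.Second).Should(Equal(want))
}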
				




Error lines from build-log.txt

... skipping 453 lines ...
STEP: Fetching activity logs took 964.283232ms
STEP: Dumping all the Cluster API resources in the "create-workload-cluster-qpf1ot" namespace
STEP: Deleting all clusters in the create-workload-cluster-qpf1ot namespace
STEP: Deleting cluster capz-e2e-k6iejf
INFO: Waiting for the Cluster create-workload-cluster-qpf1ot/capz-e2e-k6iejf to be deleted
STEP: Waiting for cluster capz-e2e-k6iejf to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-tc2ph, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-k6iejf-control-plane-4qbl2, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-k6iejf-control-plane-qcqpr, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-k6iejf-control-plane-qcqpr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-9j6jc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-p4sr8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-k6iejf-control-plane-4qbl2, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-k6iejf-control-plane-nkrrs, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-k6iejf-control-plane-nkrrs, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-k6iejf-control-plane-qcqpr, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-tnb8b, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-k6iejf-control-plane-nkrrs, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2674j, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-fzblq, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fz77t, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nq9dn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-8f59968d4-wx96s, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-wk28l, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-k6iejf-control-plane-qcqpr, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-k6iejf-control-plane-4qbl2, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-k6iejf-control-plane-4qbl2, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wp7wx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-k6iejf-control-plane-nkrrs, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace create-workload-cluster-qpf1ot
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 23m19s on Ginkgo node 2 of 3


... skipping 118 lines ...
STEP: Fetching activity logs took 969.194337ms
STEP: Dumping all the Cluster API resources in the "create-workload-cluster-kpnyax" namespace
STEP: Deleting all clusters in the create-workload-cluster-kpnyax namespace
STEP: Deleting cluster capz-e2e-zi6d0s
INFO: Waiting for the Cluster create-workload-cluster-kpnyax/capz-e2e-zi6d0s to be deleted
STEP: Waiting for cluster capz-e2e-zi6d0s to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-lfxbl, container coredns: http2: client connection lost
W0304 22:54:23.439146   19077 reflector.go:436] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-zi6d0s-control-plane-2dl7v, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-zi6d0s-control-plane-2dl7v, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-8f59968d4-8rl94, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rssxg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-zi6d0s-control-plane-2dl7v, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-sk76r, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-zi6d0s-control-plane-2dl7v, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-5dd9t, container coredns: http2: client connection lost
I0304 22:54:54.240610   19077 trace.go:205] Trace[192921532]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167 (04-Mar-2021 22:54:24.239) (total time: 30000ms):
Trace[192921532]: [30.000703562s] [30.000703562s] END
E0304 22:54:54.240690   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp 52.249.60.32:6443: i/o timeout
I0304 22:55:25.881883   19077 trace.go:205] Trace[295931968]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167 (04-Mar-2021 22:54:55.881) (total time: 30000ms):
Trace[295931968]: [30.000643705s] [30.000643705s] END
E0304 22:55:25.881963   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp 52.249.60.32:6443: i/o timeout
I0304 22:56:00.115326   19077 trace.go:205] Trace[1503856756]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167 (04-Mar-2021 22:55:30.114) (total time: 30000ms):
Trace[1503856756]: [30.000722544s] [30.000722544s] END
E0304 22:56:00.115452   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp 52.249.60.32:6443: i/o timeout
I0304 22:56:36.886952   19077 trace.go:205] Trace[1072422181]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167 (04-Mar-2021 22:56:06.886) (total time: 30000ms):
Trace[1072422181]: [30.000609436s] [30.000609436s] END
E0304 22:56:36.887025   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp 52.249.60.32:6443: i/o timeout
I0304 22:57:24.813553   19077 trace.go:205] Trace[395976000]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167 (04-Mar-2021 22:56:54.812) (total time: 30000ms):
Trace[395976000]: [30.000720482s] [30.000720482s] END
E0304 22:57:24.813627   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp 52.249.60.32:6443: i/o timeout
I0304 22:58:45.172409   19077 trace.go:205] Trace[754055166]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167 (04-Mar-2021 22:58:15.171) (total time: 30000ms):
Trace[754055166]: [30.00069969s] [30.00069969s] END
E0304 22:58:45.172488   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp 52.249.60.32:6443: i/o timeout
E0304 22:59:33.251786   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace create-workload-cluster-kpnyax
STEP: Redacting sensitive information from logs
E0304 23:00:12.764996   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 36m46s on Ginkgo node 3 of 3


• [SLOW TEST:2206.009 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:39
... skipping 154 lines ...
STEP: waiting for job default/curl-to-elb-jobbgogt to be complete
STEP: connecting directly to the external LB service
2021/03/04 22:39:20 [DEBUG] GET http://13.85.196.10
STEP: deleting the test resources
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
2021/03/04 22:39:21 failed trying to get namespace (development):namespaces "development" not found
2021/03/04 22:39:21 namespace development does not exist, creating...
STEP: Creating production namespace
2021/03/04 22:39:21 failed trying to get namespace (production):namespaces "production" not found
2021/03/04 22:39:21 namespace production does not exist, creating...
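The two namespace steps above show a get-or-create pattern: Get first, and Create only when the error is a NotFound. A minimal client-go sketch of that pattern (the function name is illustrative):

package e2e_sketch

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureNamespace returns the named namespace, creating it only when the
// Get fails with a NotFound error; any other error is surfaced as-is.
func ensureNamespace(ctx context.Context, cs kubernetes.Interface, name string) (*corev1.Namespace, error) {
	ns, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err == nil {
		return ns, nil
	}
	if !apierrors.IsNotFound(err) {
		return nil, err
	}
	log.Printf("namespace %s does not exist, creating...", name)
	return cs.CoreV1().Namespaces().Create(ctx,
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}},
		metav1.CreateOptions{})
}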
STEP: Creating frontendProd, backend and network-policy pod deployments
STEP: Ensure there is a running frontend-prod pod
STEP: Waiting for deployment production/frontend-prod-17591 to be available
STEP: Ensure there is a running frontend-dev pod
STEP: Waiting for deployment development/frontend-dev-117591 to be available
... skipping 6 lines ...
STEP: Ensuring we have outbound internet access from the backend pods
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.143.195 port 80: Connection timed out

STEP: Cleaning up after ourselves
STEP: Applying a network policy to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.143.195 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.143.195 port 80: Connection timed out

STEP: Cleaning up after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.143.196 port 80: Connection timed out

STEP: Cleaning up after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.143.197 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.143.196 port 80: Connection timed out

STEP: Cleaning up after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.143.195 port 80: Connection timed out

STEP: Cleaning up after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.143.195 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
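The sequence above applies a series of NetworkPolicy manifests and verifies each one with curl from inside the cluster, so a connection timeout is the expected outcome once traffic is denied. As an illustration, the first policy in the sequence — denying all ingress to app: webapp, role: backend pods in the development namespace — would look roughly like this as a Go object (the labels are taken from the log; the policy name is an assumption):

package e2e_sketch

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// denyBackendIngress selects the backend pods, declares the Ingress policy
// type, and lists no ingress rules — which denies all ingress to the
// selected pods.
var denyBackendIngress = &networkingv1.NetworkPolicy{
	ObjectMeta: metav1.ObjectMeta{
		Name:      "backend-deny-ingress", // hypothetical name
		Namespace: "development",
	},
	Spec: networkingv1.NetworkPolicySpec{
		PodSelector: metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
		},
		PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
		// No Ingress rules: all ingress to the selected pods is denied.
	},
}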
STEP: creating Azure clients with the workload cluster's subscription
STEP: verifying EnableAcceleratedNetworking for the primary NIC of each VM
STEP: Dumping logs from the "capz-e2e-tfohpr" workload cluster
STEP: Dumping workload cluster create-workload-cluster-9wd6vd/capz-e2e-tfohpr logs
STEP: Dumping workload cluster create-workload-cluster-9wd6vd/capz-e2e-tfohpr kube-system pod logs
... skipping 21 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-tfohpr-control-plane-7zrpj, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-nkll7, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-tfohpr-control-plane-swb87, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-bjmcv, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-pmjrq, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-tfohpr-control-plane-7r9v8, container etcd
STEP: Got error while iterating over activity logs for resource group capz-e2e-tfohpr: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000337837s
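The 30.000s duration together with "context deadline exceeded" suggests the activity-log collector pages through results under a context with roughly a 30-second timeout, so a slow or failing "next results" request aborts collection instead of stalling teardown. A sketch of that pattern under those assumptions — listPage is a hypothetical stand-in for the Azure SDK pager call:

package e2e_sketch

import (
	"context"
	"time"
)

// fetchWithDeadline bounds pagination with a context deadline: each page
// fetch inherits the deadline, so a hung request returns an error wrapping
// context.DeadlineExceeded after ~30s rather than blocking forever.
func fetchWithDeadline(parent context.Context, listPage func(context.Context) (done bool, err error)) error {
	ctx, cancel := context.WithTimeout(parent, 30*time.Second)
	defer cancel()
	for {
		done, err := listPage(ctx)
		if err != nil {
			return err // surfaces as "context deadline exceeded" in the log
		}
		if done {
			return nil
		}
	}
}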
STEP: Dumping all the Cluster API resources in the "create-workload-cluster-9wd6vd" namespace
STEP: Deleting all clusters in the create-workload-cluster-9wd6vd namespace
STEP: Deleting cluster capz-e2e-tfohpr
INFO: Waiting for the Cluster create-workload-cluster-9wd6vd/capz-e2e-tfohpr to be deleted
STEP: Waiting for cluster capz-e2e-tfohpr to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-tfohpr-control-plane-7r9v8, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5vcc2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-tfohpr-control-plane-7r9v8, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-66hd7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-tfohpr-control-plane-swb87, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5cm8t, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bjmcv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-8f59968d4-blz8m, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-tfohpr-control-plane-7zrpj, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-pmjrq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4z7tk, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-tfohpr-control-plane-swb87, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-tfohpr-control-plane-7r9v8, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-k5zhc, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-pmkdk, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-tfohpr-control-plane-7zrpj, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-tfohpr-control-plane-swb87, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-tfohpr-control-plane-7zrpj, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nkll7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dqp2m, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-tfohpr-control-plane-swb87, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-tfohpr-control-plane-7zrpj, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-tfohpr-control-plane-7r9v8, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-q95g7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-4mxck, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace create-workload-cluster-9wd6vd
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 43m28s on Ginkgo node 1 of 3


... skipping 25 lines ...
azureclusteridentity.infrastructure.cluster.x-k8s.io/multi-tenancy-identity created
configmap/cni-capz-e2e-q64wjl-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-q64wjl-crs-0 created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E0304 23:01:09.832190   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by create-workload-cluster-wv9z96/capz-e2e-q64wjl-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
E0304 23:02:01.570580   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0304 23:02:44.251831   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0304 23:03:33.214375   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0304 23:04:08.624117   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0304 23:04:55.997830   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane create-workload-cluster-wv9z96/capz-e2e-q64wjl-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
E0304 23:05:49.269163   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0304 23:06:46.853750   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0304 23:07:34.171990   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0304 23:08:10.113439   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for the machine pools to be provisioned
STEP: creating Azure clients with the workload cluster's subscription
STEP: Dumping logs from the "capz-e2e-q64wjl" workload cluster
STEP: Dumping workload cluster create-workload-cluster-wv9z96/capz-e2e-q64wjl logs
E0304 23:08:45.071956   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Dumping workload cluster create-workload-cluster-wv9z96/capz-e2e-q64wjl kube-system pod logs
STEP: Fetching kube-system pod logs took 390.62979ms
STEP: Dumping workload cluster create-workload-cluster-wv9z96/capz-e2e-q64wjl Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-f9fd979d6-vn2qq, container coredns
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-q64wjl-control-plane-24x75, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-q64wjl-control-plane-24x75, container kube-apiserver
... skipping 2 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-s5xkd, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-5c5bh, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-m8vff, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-f9fd979d6-mk97q, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-q64wjl-control-plane-24x75, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-2kqsz, container kube-proxy
STEP: Error starting logs stream for pod kube-system/calico-kube-controllers-8f59968d4-hmrtc, container calico-kube-controllers: container "calico-kube-controllers" in pod "calico-kube-controllers-8f59968d4-hmrtc" is waiting to start: ContainerCreating
STEP: Error starting logs stream for pod kube-system/coredns-f9fd979d6-vn2qq, container coredns: container "coredns" in pod "coredns-f9fd979d6-vn2qq" is waiting to start: ContainerCreating
STEP: Error starting logs stream for pod kube-system/coredns-f9fd979d6-mk97q, container coredns: container "coredns" in pod "coredns-f9fd979d6-mk97q" is waiting to start: ContainerCreating
STEP: Fetching activity logs took 473.51632ms
STEP: Dumping all the Cluster API resources in the "create-workload-cluster-wv9z96" namespace
STEP: Deleting all clusters in the create-workload-cluster-wv9z96 namespace
STEP: Deleting cluster capz-e2e-q64wjl
INFO: Waiting for the Cluster create-workload-cluster-wv9z96/capz-e2e-q64wjl to be deleted
STEP: Waiting for cluster capz-e2e-q64wjl to be deleted
E0304 23:09:29.205209   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0304 23:10:08.051040   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0304 23:11:06.080370   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0304 23:11:57.362521   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-q64wjl-control-plane-24x75, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-q64wjl-control-plane-24x75, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-q64wjl-control-plane-24x75, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-m8vff, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-q64wjl-control-plane-24x75, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2kqsz, container kube-proxy: http2: client connection lost
E0304 23:12:51.818394   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0304 23:13:42.508147   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0304 23:14:18.921758   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0304 23:15:01.884861   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0304 23:15:48.656597   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0304 23:16:45.564103   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
E0304 23:17:42.080254   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace create-workload-cluster-wv9z96
STEP: Redacting sensitive information from logs
E0304 23:18:41.648818   19077 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-kpnyax/events?resourceVersion=6123": dial tcp: lookup capz-e2e-zi6d0s-2b4724d.southcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 18m13s on Ginkgo node 3 of 3


• [SLOW TEST:1093.432 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:39
... skipping 5 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a VMSS cluster [It] with a single control plane node and an AzureMachinePool with 2 nodes 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api@v0.3.11-0.20210209200458-51a6d64d171c/test/framework/machinepool_helpers.go:85

Ran 5 of 18 Specs in 3444.720 seconds
FAIL! -- 4 Passed | 1 Failed | 0 Pending | 13 Skipped


Ginkgo ran 1 suite in 58m32.322687363s
Test Suite Failed
make[1]: *** [Makefile:169: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:177: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...