PR jsturtevant: Inject Windows custom binaries for use in PRs and running against Kubernetes CI
Result FAILURE
Tests 1 failed / 3 succeeded
Started 2021-07-28 16:13
Elapsed 1h7m
Revision 63c2f5161eb5ad44f8c708cf6f8e6caa9d681e2d
Refs 1388

Test Failures


capz-e2e Workload cluster creation With 3 control-plane nodes and 2 worker nodes 49m7s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sWith\s3\scontrol\-plane\snodes\sand\s2\sworker\snodes$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:183
Expected
    <bool>: false
to equal
    <bool>: true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_accelnet.go:93
				
stdout/stderr from junit.e2e_suite.2.xml
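The failure above is a standard Gomega boolean-equality assertion. A minimal sketch of the kind of check that produces it, assuming (per the "verifying EnableAcceleratedNetworking for the primary NIC of each VM" step later in the log) that the spec asserts the accelerated-networking flag on each NIC; the package, function, and parameter names below are illustrative only, not the actual code at azure_accelnet.go:93:

    // Illustrative sketch only. A Gomega assertion of this shape reports
    //   Expected
    //       <bool>: false
    //   to equal
    //       <bool>: true
    // when accelerated networking is not enabled on a VM's primary NIC.
    package e2e

    import . "github.com/onsi/gomega"

    // enableAcceleratedNetworking mirrors the *bool field the Azure SDK
    // exposes on a network interface; the parameter name is hypothetical.
    func expectAcceleratedNetworking(enableAcceleratedNetworking *bool) {
    	Expect(enableAcceleratedNetworking).NotTo(BeNil())
    	Expect(*enableAcceleratedNetworking).To(Equal(true))
    }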



3 Passed Tests

18 Skipped Tests

Error lines from build-log.txt

... skipping 431 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:257

INFO: "With ipv6 worker node" started at Wed, 28 Jul 2021 16:20:43 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-qn340o" for hosting the cluster
Jul 28 16:20:43.743: INFO: starting to create namespace for hosting the "capz-e2e-qn340o" test spec
2021/07/28 16:20:43 failed trying to get namespace (capz-e2e-qn340o):namespaces "capz-e2e-qn340o" not found
INFO: Creating namespace capz-e2e-qn340o
INFO: Creating event watcher for namespace "capz-e2e-qn340o"
INFO: Cluster name is capz-e2e-qn340o-ipv6
INFO: Creating the workload cluster with name "capz-e2e-qn340o-ipv6" using the "ipv6" template (Kubernetes v1.21.2, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-e2e-qn340o-ipv6 --infrastructure (default) --kubernetes-version v1.21.2 --control-plane-machine-count 3 --worker-machine-count 1 --flavor ipv6
... skipping 92 lines ...
STEP: Fetching activity logs took 1.055285838s
STEP: Dumping all the Cluster API resources in the "capz-e2e-qn340o" namespace
STEP: Deleting all clusters in the capz-e2e-qn340o namespace
STEP: Deleting cluster capz-e2e-qn340o-ipv6
INFO: Waiting for the Cluster capz-e2e-qn340o/capz-e2e-qn340o-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-qn340o-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-qn340o-ipv6-control-plane-56994, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-784b4f4c9-fz6rp, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-qn340o-ipv6-control-plane-99mcm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-g2fn4, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-qn340o-ipv6-control-plane-56994, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rxgn7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-qn340o-ipv6-control-plane-hmqjf, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-qn340o-ipv6-control-plane-hmqjf, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-nvkjq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-qn340o-ipv6-control-plane-99mcm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-qn340o-ipv6-control-plane-99mcm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-g2dzg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-tz57q, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5tp4q, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jr9m9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-bt8xv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-qn340o-ipv6-control-plane-56994, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ntgkb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-qn340o-ipv6-control-plane-hmqjf, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-qn340o-ipv6-control-plane-hmqjf, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-qn340o-ipv6-control-plane-56994, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jmgbc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-qn340o-ipv6-control-plane-99mcm, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-qn340o
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 24m43s on Ginkgo node 3 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:303

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Wed, 28 Jul 2021 16:45:26 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-mcizce" for hosting the cluster
Jul 28 16:45:26.889: INFO: starting to create namespace for hosting the "capz-e2e-mcizce" test spec
2021/07/28 16:45:26 failed trying to get namespace (capz-e2e-mcizce):namespaces "capz-e2e-mcizce" not found
INFO: Creating namespace capz-e2e-mcizce
INFO: Creating event watcher for namespace "capz-e2e-mcizce"
INFO: Cluster name is capz-e2e-mcizce-vmss
INFO: Creating the workload cluster with name "capz-e2e-mcizce-vmss" using the "machine-pool" template (Kubernetes v1.21.2, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-e2e-mcizce-vmss --infrastructure (default) --kubernetes-version v1.21.2 --control-plane-machine-count 1 --worker-machine-count 2 --flavor machine-pool
... skipping 105 lines ...
STEP: Fetching activity logs took 566.412705ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-mcizce" namespace
STEP: Deleting all clusters in the capz-e2e-mcizce namespace
STEP: Deleting cluster capz-e2e-mcizce-vmss
INFO: Waiting for the Cluster capz-e2e-mcizce/capz-e2e-mcizce-vmss to be deleted
STEP: Waiting for cluster capz-e2e-mcizce-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-mcizce-vmss-control-plane-8n52w, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-mcizce-vmss-control-plane-8n52w, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-krk8x, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-mcizce-vmss-control-plane-8n52w, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-b45pb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-mcizce-vmss-control-plane-8n52w, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-z5k6b, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-s4z6b, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-n9kdc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mksh7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-784b4f4c9-rz85v, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-f9wnx, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hvf7n, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-mcizce
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 20m39s on Ginkgo node 3 of 3

... skipping 12 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:183

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Wed, 28 Jul 2021 16:20:43 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-nvc3gt" for hosting the cluster
Jul 28 16:20:43.740: INFO: starting to create namespace for hosting the "capz-e2e-nvc3gt" test spec
2021/07/28 16:20:43 failed trying to get namespace (capz-e2e-nvc3gt):namespaces "capz-e2e-nvc3gt" not found
INFO: Creating namespace capz-e2e-nvc3gt
INFO: Creating event watcher for namespace "capz-e2e-nvc3gt"
INFO: Cluster name is capz-e2e-nvc3gt-ha
INFO: Creating the workload cluster with name "capz-e2e-nvc3gt-ha" using the "(default)" template (Kubernetes v1.21.2, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-e2e-nvc3gt-ha --infrastructure (default) --kubernetes-version v1.21.2 --control-plane-machine-count 3 --worker-machine-count 2 --flavor (default)
... skipping 56 lines ...
STEP: waiting for job default/curl-to-elb-jobo75pt1l1u97 to be complete
Jul 28 16:35:15.646: INFO: waiting for job default/curl-to-elb-jobo75pt1l1u97 to be complete
Jul 28 16:35:25.756: INFO: job default/curl-to-elb-jobo75pt1l1u97 is complete, took 10.110547714s
STEP: connecting directly to the external LB service
Jul 28 16:35:25.756: INFO: starting attempts to connect directly to the external LB service
2021/07/28 16:35:25 [DEBUG] GET http://13.64.169.216
2021/07/28 16:35:55 [ERR] GET http://13.64.169.216 request failed: Get "http://13.64.169.216": dial tcp 13.64.169.216:80: i/o timeout
2021/07/28 16:35:55 [DEBUG] GET http://13.64.169.216: retrying in 1s (4 left)
Jul 28 16:35:56.865: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Jul 28 16:35:56.866: INFO: starting to delete external LB service webcuvynu-elb
Jul 28 16:35:56.973: INFO: starting to delete deployment webcuvynu
Jul 28 16:35:57.035: INFO: starting to delete job curl-to-elb-jobo75pt1l1u97
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Jul 28 16:35:57.147: INFO: starting to create dev deployment namespace
2021/07/28 16:35:57 failed trying to get namespace (development):namespaces "development" not found
2021/07/28 16:35:57 namespace development does not exist, creating...
STEP: Creating production namespace
Jul 28 16:35:57.264: INFO: starting to create prod deployment namespace
2021/07/28 16:35:57 failed trying to get namespace (production):namespaces "production" not found
2021/07/28 16:35:57 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Jul 28 16:35:57.381: INFO: starting to create frontend-prod deployments
Jul 28 16:35:57.442: INFO: starting to create frontend-dev deployments
Jul 28 16:35:57.523: INFO: starting to create backend deployments
Jul 28 16:35:57.594: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Jul 28 16:36:21.484: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.21.2 port 80: Connection timed out

STEP: Cleaning up after ourselves
Jul 28 16:38:33.183: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Jul 28 16:38:33.411: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.21.2 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.21.2 port 80: Connection timed out

STEP: Cleaning up after ourselves
Jul 28 16:42:54.971: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Jul 28 16:42:55.242: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.21.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Jul 28 16:45:06.395: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Jul 28 16:45:06.627: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.76.5 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.21.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Jul 28 16:49:28.540: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Jul 28 16:49:28.787: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.21.2 port 80: Connection timed out

STEP: Cleaning up after ourselves
Jul 28 16:51:39.612: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Jul 28 16:51:39.851: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.21.2 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: creating Azure clients with the workload cluster's subscription
STEP: verifying EnableAcceleratedNetworking for the primary NIC of each VM
STEP: Dumping logs from the "capz-e2e-nvc3gt-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-nvc3gt/capz-e2e-nvc3gt-ha logs
Jul 28 16:53:51.446: INFO: INFO: Collecting logs for node capz-e2e-nvc3gt-ha-control-plane-twl8j in cluster capz-e2e-nvc3gt-ha in namespace capz-e2e-nvc3gt
... skipping 41 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-kmthp, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-nvc3gt-ha-control-plane-t2sjz, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-xlk7d, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-lqghn, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-nvc3gt-ha-control-plane-twl8j, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-nvc3gt-ha-control-plane-8qhnm, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-nvc3gt-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000629201s
STEP: Dumping all the Cluster API resources in the "capz-e2e-nvc3gt" namespace
STEP: Deleting all clusters in the capz-e2e-nvc3gt namespace
STEP: Deleting cluster capz-e2e-nvc3gt-ha
INFO: Waiting for the Cluster capz-e2e-nvc3gt/capz-e2e-nvc3gt-ha to be deleted
STEP: Waiting for cluster capz-e2e-nvc3gt-ha to be deleted
... skipping 61 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:122

INFO: "Creates a public management cluster in the same vnet" started at Wed, 28 Jul 2021 16:20:43 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-i44jnh" for hosting the cluster
Jul 28 16:20:43.733: INFO: starting to create namespace for hosting the "capz-e2e-i44jnh" test spec
2021/07/28 16:20:43 failed trying to get namespace (capz-e2e-i44jnh):namespaces "capz-e2e-i44jnh" not found
INFO: Creating namespace capz-e2e-i44jnh
INFO: Creating event watcher for namespace "capz-e2e-i44jnh"
INFO: Cluster name is capz-e2e-i44jnh-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
STEP: creating a network security group
... skipping 99 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-qvrvw, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-95qqt, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-784b4f4c9-lgj7r, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-7xfgb, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-i44jnh-public-custom-vnet-control-plane-429qn, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-i44jnh-public-custom-vnet-control-plane-429qn, container etcd
STEP: Got error while iterating over activity logs for resource group capz-e2e-i44jnh-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001051205s
STEP: Dumping all the Cluster API resources in the "capz-e2e-i44jnh" namespace
STEP: Deleting all clusters in the capz-e2e-i44jnh namespace
STEP: Deleting cluster capz-e2e-i44jnh-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-i44jnh/capz-e2e-i44jnh-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-i44jnh-public-custom-vnet to be deleted
W0728 17:13:05.735977   23548 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I0728 17:13:36.713743   23548 trace.go:205] Trace[436340495]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167 (28-Jul-2021 17:13:06.711) (total time: 30001ms):
Trace[436340495]: [30.001820088s] [30.001820088s] END
E0728 17:13:36.713813   23548 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-i44jnh-public-custom-vnet-fffcf205.westus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-i44jnh/events?resourceVersion=8566": dial tcp 23.100.37.157:6443: i/o timeout
I0728 17:14:08.893012   23548 trace.go:205] Trace[1225511528]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167 (28-Jul-2021 17:13:38.891) (total time: 30001ms):
Trace[1225511528]: [30.001493392s] [30.001493392s] END
E0728 17:14:08.893086   23548 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-i44jnh-public-custom-vnet-fffcf205.westus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-i44jnh/events?resourceVersion=8566": dial tcp 23.100.37.157:6443: i/o timeout
I0728 17:14:44.854363   23548 trace.go:205] Trace[629458047]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167 (28-Jul-2021 17:14:14.853) (total time: 30001ms):
Trace[629458047]: [30.001130013s] [30.001130013s] END
E0728 17:14:44.854448   23548 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-i44jnh-public-custom-vnet-fffcf205.westus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-i44jnh/events?resourceVersion=8566": dial tcp 23.100.37.157:6443: i/o timeout
I0728 17:15:23.157949   23548 trace.go:205] Trace[1616138287]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167 (28-Jul-2021 17:14:53.156) (total time: 30001ms):
Trace[1616138287]: [30.001492081s] [30.001492081s] END
E0728 17:15:23.158022   23548 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-i44jnh-public-custom-vnet-fffcf205.westus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-i44jnh/events?resourceVersion=8566": dial tcp 23.100.37.157:6443: i/o timeout
I0728 17:16:08.604718   23548 trace.go:205] Trace[1858292790]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167 (28-Jul-2021 17:15:38.603) (total time: 30001ms):
Trace[1858292790]: [30.001310405s] [30.001310405s] END
E0728 17:16:08.604787   23548 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-i44jnh-public-custom-vnet-fffcf205.westus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-i44jnh/events?resourceVersion=8566": dial tcp 23.100.37.157:6443: i/o timeout
I0728 17:17:22.041738   23548 trace.go:205] Trace[60780408]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167 (28-Jul-2021 17:16:52.041) (total time: 30000ms):
Trace[60780408]: [30.000673644s] [30.000673644s] END
E0728 17:17:22.041815   23548 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-i44jnh-public-custom-vnet-fffcf205.westus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-i44jnh/events?resourceVersion=8566": dial tcp 23.100.37.157:6443: i/o timeout
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-i44jnh
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Jul 28 17:17:41.833: INFO: deleting an existing virtual network "custom-vnet"
Jul 28 17:17:52.647: INFO: deleting an existing route table "node-routetable"
E0728 17:17:56.837530   23548 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-i44jnh-public-custom-vnet-fffcf205.westus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-i44jnh/events?resourceVersion=8566": dial tcp: lookup capz-e2e-i44jnh-public-custom-vnet-fffcf205.westus.cloudapp.azure.com on 10.63.240.10:53: no such host
Jul 28 17:18:03.096: INFO: deleting an existing network security group "node-nsg"
Jul 28 17:18:13.417: INFO: deleting an existing network security group "control-plane-nsg"
Jul 28 17:18:23.810: INFO: verifying the existing resource group "capz-e2e-i44jnh-public-custom-vnet" is empty
Jul 28 17:18:23.964: INFO: deleting the existing resource group "capz-e2e-i44jnh-public-custom-vnet"
E0728 17:18:45.107240   23548 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-i44jnh-public-custom-vnet-fffcf205.westus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-i44jnh/events?resourceVersion=8566": dial tcp: lookup capz-e2e-i44jnh-public-custom-vnet-fffcf205.westus.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E0728 17:19:44.379468   23548 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-i44jnh-public-custom-vnet-fffcf205.westus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-i44jnh/events?resourceVersion=8566": dial tcp: lookup capz-e2e-i44jnh-public-custom-vnet-fffcf205.westus.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 59m10s on Ginkgo node 1 of 3


• [SLOW TEST:3549.976 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:41
... skipping 5 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation [It] With 3 control-plane nodes and 2 worker nodes 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_accelnet.go:93

Ran 4 of 22 Specs in 3691.411 seconds
FAIL! -- 3 Passed | 1 Failed | 0 Pending | 18 Skipped


Ginkgo ran 1 suite in 1h2m56.543254s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...