PR jsturtevant: Inject Windows custom binaries for use in PRs and running against Kubernetes CI
Result FAILURE
Tests 1 failed / 3 succeeded
Started 2021-07-26 17:45
Elapsed 55m6s
Revision 22717db0cff6596c4254cebe41ba8cd5a448aa5c
Refs 1388

Test Failures


capz-e2e Workload cluster creation With 3 control-plane nodes and 2 worker nodes (46m1s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sWith\s3\scontrol\-plane\snodes\sand\s2\sworker\snodes$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:169
Expected
    <bool>: false
to equal
    <bool>: true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_accelnet.go:93
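
The failure output above is the standard Gomega report for a boolean equality assertion. A minimal sketch of an assertion with that shape follows; it is illustrative only (the real check lives at azure_accelnet.go:93 in the CAPZ e2e suite), and `enabled` stands in for whatever value that check reads from Azure.

```go
package e2e_test

import (
	"testing"

	. "github.com/onsi/gomega"
)

// Illustrative only: an assertion of this shape produces the
// "Expected <bool>: false to equal <bool>: true" report seen above
// when the value under test is false.
func TestBooleanAssertionShape(t *testing.T) {
	g := NewWithT(t)

	enabled := false // stand-in for the value read by the real check
	g.Expect(enabled).To(Equal(true))
}
```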
stdout/stderr from junit.e2e_suite.2.xml



3 Passed Tests

18 Skipped Tests

Error lines from build-log.txt

... skipping 432 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:243

INFO: "With ipv6 worker node" started at Mon, 26 Jul 2021 17:53:43 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-1k7vwb" for hosting the cluster
Jul 26 17:53:43.929: INFO: starting to create namespace for hosting the "capz-e2e-1k7vwb" test spec
2021/07/26 17:53:43 failed trying to get namespace (capz-e2e-1k7vwb):namespaces "capz-e2e-1k7vwb" not found
INFO: Creating namespace capz-e2e-1k7vwb
INFO: Creating event watcher for namespace "capz-e2e-1k7vwb"
INFO: Cluster name is capz-e2e-1k7vwb-ipv6
INFO: Creating the workload cluster with name "capz-e2e-1k7vwb-ipv6" using the "ipv6" template (Kubernetes v1.21.2, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-e2e-1k7vwb-ipv6 --infrastructure (default) --kubernetes-version v1.21.2 --control-plane-machine-count 3 --worker-machine-count 1 --flavor ipv6
... skipping 92 lines ...
STEP: Fetching activity logs took 1.008390662s
STEP: Dumping all the Cluster API resources in the "capz-e2e-1k7vwb" namespace
STEP: Deleting all clusters in the capz-e2e-1k7vwb namespace
STEP: Deleting cluster capz-e2e-1k7vwb-ipv6
INFO: Waiting for the Cluster capz-e2e-1k7vwb/capz-e2e-1k7vwb-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-1k7vwb-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-1k7vwb-ipv6-control-plane-pkh48, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-1k7vwb-ipv6-control-plane-xm67g, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-srqpt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-1k7vwb-ipv6-control-plane-pkh48, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-gsnns, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-1k7vwb-ipv6-control-plane-qmltx, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-1k7vwb-ipv6-control-plane-pkh48, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-bt4dg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-zf8xf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qmpnw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-1k7vwb-ipv6-control-plane-xm67g, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-1k7vwb-ipv6-control-plane-xm67g, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-c5znc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4hxwj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-1k7vwb-ipv6-control-plane-qmltx, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vrkw2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-1k7vwb-ipv6-control-plane-qmltx, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-1k7vwb-ipv6-control-plane-xm67g, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ld5pq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-1k7vwb-ipv6-control-plane-qmltx, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-784b4f4c9-7z5bw, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-tmwbv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-1k7vwb-ipv6-control-plane-pkh48, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-1k7vwb
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 21m25s on Ginkgo node 3 of 3


... skipping 9 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:108

INFO: "Creates a public management cluster in the same vnet" started at Mon, 26 Jul 2021 17:53:43 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-c01eb4" for hosting the cluster
Jul 26 17:53:43.914: INFO: starting to create namespace for hosting the "capz-e2e-c01eb4" test spec
2021/07/26 17:53:43 failed trying to get namespace (capz-e2e-c01eb4):namespaces "capz-e2e-c01eb4" not found
INFO: Creating namespace capz-e2e-c01eb4
INFO: Creating event watcher for namespace "capz-e2e-c01eb4"
INFO: Cluster name is capz-e2e-c01eb4-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
STEP: creating a network security group
... skipping 102 lines ...
STEP: Fetching activity logs took 1.181473509s
STEP: Dumping all the Cluster API resources in the "capz-e2e-c01eb4" namespace
STEP: Deleting all clusters in the capz-e2e-c01eb4 namespace
STEP: Deleting cluster capz-e2e-c01eb4-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-c01eb4/capz-e2e-c01eb4-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-c01eb4-public-custom-vnet to be deleted
W0726 18:18:10.730116   23578 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I0726 18:18:41.772898   23578 trace.go:205] Trace[1106410694]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167 (26-Jul-2021 18:18:11.771) (total time: 30001ms):
Trace[1106410694]: [30.001664753s] [30.001664753s] END
E0726 18:18:41.772990   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp 20.63.61.58:6443: i/o timeout
I0726 18:19:14.676973   23578 trace.go:205] Trace[460128162]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167 (26-Jul-2021 18:18:44.675) (total time: 30001ms):
Trace[460128162]: [30.001781455s] [30.001781455s] END
E0726 18:19:14.677060   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp 20.63.61.58:6443: i/o timeout
I0726 18:19:49.096895   23578 trace.go:205] Trace[683024728]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167 (26-Jul-2021 18:19:19.095) (total time: 30001ms):
Trace[683024728]: [30.001406335s] [30.001406335s] END
E0726 18:19:49.096973   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp 20.63.61.58:6443: i/o timeout
I0726 18:20:28.500458   23578 trace.go:205] Trace[607811211]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167 (26-Jul-2021 18:19:58.499) (total time: 30000ms):
Trace[607811211]: [30.000768766s] [30.000768766s] END
E0726 18:20:28.500533   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp 20.63.61.58:6443: i/o timeout
I0726 18:21:15.055451   23578 trace.go:205] Trace[469339106]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167 (26-Jul-2021 18:20:45.053) (total time: 30002ms):
Trace[469339106]: [30.002044916s] [30.002044916s] END
E0726 18:21:15.055553   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp 20.63.61.58:6443: i/o timeout
I0726 18:22:15.858248   23578 trace.go:205] Trace[774965466]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167 (26-Jul-2021 18:21:45.857) (total time: 30000ms):
Trace[774965466]: [30.000933256s] [30.000933256s] END
E0726 18:22:15.858320   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp 20.63.61.58:6443: i/o timeout
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-c01eb4
STEP: Redacting sensitive information from logs
INFO: "Creates a public management cluster in the same vnet" ran for 29m14s on Ginkgo node 1 of 3


... skipping 11 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:289

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Mon, 26 Jul 2021 18:15:08 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-2g63ek" for hosting the cluster
Jul 26 18:15:08.975: INFO: starting to create namespace for hosting the "capz-e2e-2g63ek" test spec
2021/07/26 18:15:08 failed trying to get namespace (capz-e2e-2g63ek):namespaces "capz-e2e-2g63ek" not found
INFO: Creating namespace capz-e2e-2g63ek
INFO: Creating event watcher for namespace "capz-e2e-2g63ek"
INFO: Cluster name is capz-e2e-2g63ek-vmss
INFO: Creating the workload cluster with name "capz-e2e-2g63ek-vmss" using the "machine-pool" template (Kubernetes v1.21.2, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-e2e-2g63ek-vmss --infrastructure (default) --kubernetes-version v1.21.2 --control-plane-machine-count 1 --worker-machine-count 2 --flavor machine-pool
... skipping 51 lines ...
STEP: waiting for job default/curl-to-elb-jobm656lza1iil to be complete
Jul 26 18:21:50.263: INFO: waiting for job default/curl-to-elb-jobm656lza1iil to be complete
Jul 26 18:22:00.334: INFO: job default/curl-to-elb-jobm656lza1iil is complete, took 10.070483264s
STEP: connecting directly to the external LB service
Jul 26 18:22:00.334: INFO: starting attempts to connect directly to the external LB service
2021/07/26 18:22:00 [DEBUG] GET http://52.228.100.15
2021/07/26 18:22:30 [ERR] GET http://52.228.100.15 request failed: Get "http://52.228.100.15": dial tcp 52.228.100.15:80: i/o timeout
2021/07/26 18:22:30 [DEBUG] GET http://52.228.100.15: retrying in 1s (4 left)
Jul 26 18:22:31.401: INFO: successfully connected to the external LB service
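
The "[DEBUG] GET ... retrying in 1s (4 left)" lines above have the output shape of hashicorp/go-retryablehttp. A sketch of probing the external LB that way is below; it is an assumption about the helper the test uses, with the URL handling and retry budget taken from the log.

```go
package e2e

import (
	"log"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

// probeExternalLB issues a GET against the LB's public IP, retrying transient
// failures such as the "dial tcp ... i/o timeout" seen above.
func probeExternalLB(url string) error {
	client := retryablehttp.NewClient()
	client.RetryMax = 4 // matches the "(4 left)" countdown in the log

	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	log.Printf("connected to %s: %s", url, resp.Status)
	return nil
}
```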
STEP: deleting the test resources
Jul 26 18:22:31.401: INFO: starting to delete external LB service web9j2e61-elb
Jul 26 18:22:31.457: INFO: starting to delete deployment web9j2e61
Jul 26 18:22:31.490: INFO: starting to delete job curl-to-elb-jobm656lza1iil
... skipping 64 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:169

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Mon, 26 Jul 2021 17:53:43 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-djz2jd" for hosting the cluster
Jul 26 17:53:43.929: INFO: starting to create namespace for hosting the "capz-e2e-djz2jd" test spec
2021/07/26 17:53:43 failed trying to get namespace (capz-e2e-djz2jd):namespaces "capz-e2e-djz2jd" not found
INFO: Creating namespace capz-e2e-djz2jd
INFO: Creating event watcher for namespace "capz-e2e-djz2jd"
INFO: Cluster name is capz-e2e-djz2jd-ha
INFO: Creating the workload cluster with name "capz-e2e-djz2jd-ha" using the "(default)" template (Kubernetes v1.21.2, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-e2e-djz2jd-ha --infrastructure (default) --kubernetes-version v1.21.2 --control-plane-machine-count 3 --worker-machine-count 2 --flavor (default)
... skipping 64 lines ...
Jul 26 18:02:24.423: INFO: starting to delete external LB service web5kn0um-elb
Jul 26 18:02:24.560: INFO: starting to delete deployment web5kn0um
Jul 26 18:02:24.604: INFO: starting to delete job curl-to-elb-jobuv3r0ndoutu
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Jul 26 18:02:24.716: INFO: starting to create dev deployment namespace
2021/07/26 18:02:24 failed trying to get namespace (development):namespaces "development" not found
2021/07/26 18:02:24 namespace development does not exist, creating...
STEP: Creating production namespace
Jul 26 18:02:24.801: INFO: starting to create prod deployment namespace
2021/07/26 18:02:24 failed trying to get namespace (production):namespaces "production" not found
2021/07/26 18:02:24 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Jul 26 18:02:24.880: INFO: starting to create frontend-prod deployments
Jul 26 18:02:24.925: INFO: starting to create frontend-dev deployments
Jul 26 18:02:24.976: INFO: starting to create backend deployments
Jul 26 18:02:25.029: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Jul 26 18:02:48.286: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.218.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Jul 26 18:04:58.748: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Jul 26 18:04:58.955: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.218.3 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.218.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Jul 26 18:09:20.894: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Jul 26 18:09:21.125: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.218.4 port 80: Connection timed out

STEP: Cleaning up after ourselves
Jul 26 18:11:31.962: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Jul 26 18:11:32.187: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.132.195 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.218.4 port 80: Connection timed out

STEP: Cleaning up after ourselves
Jul 26 18:15:54.106: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Jul 26 18:15:54.280: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.218.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Jul 26 18:18:05.178: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Jul 26 18:18:05.348: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.218.3 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
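
The sequence above applies and removes a series of NetworkPolicies and verifies reachability with curl from the policy-test pods. For reference, a policy equivalent to the "backend-deny-ingress" one applied first could be built like this in Go (names and labels mirror the log; this is not the test's own helper):

```go
package e2e

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// backendDenyIngressPolicy denies all ingress to pods labeled
// app: webapp, role: backend in the development namespace.
func backendDenyIngressPolicy() *networkingv1.NetworkPolicy {
	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "backend-deny-ingress",
			Namespace: "development",
		},
		Spec: networkingv1.NetworkPolicySpec{
			// Select the backend pods; with no Ingress rules listed and the
			// Ingress policy type set, all inbound traffic to them is denied.
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "webapp", "role": "backend"},
			},
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
		},
	}
}
```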
STEP: creating Azure clients with the workload cluster's subscription
STEP: verifying EnableAcceleratedNetworking for the primary NIC of each VM
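
This verification step is where the suite's only failure originates (azure_accelnet.go:93). A hedged sketch of the kind of query it performs, reading EnableAcceleratedNetworking from a NIC with the track-1 Azure SDK, is below; the API version in the import path, the function name, and the auth plumbing are assumptions, not the CAPZ source.

```go
package e2e

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2021-02-01/network"
	"github.com/Azure/go-autorest/autorest"
)

// nicHasAcceleratedNetworking reads a NIC and reports whether accelerated
// networking is enabled; a false result here is what would make the boolean
// assertion in the failure above evaluate to false.
func nicHasAcceleratedNetworking(ctx context.Context, auth autorest.Authorizer, subscriptionID, resourceGroup, nicName string) (bool, error) {
	client := network.NewInterfacesClient(subscriptionID)
	client.Authorizer = auth

	nic, err := client.Get(ctx, resourceGroup, nicName, "")
	if err != nil {
		return false, fmt.Errorf("getting NIC %s: %w", nicName, err)
	}
	if nic.InterfacePropertiesFormat == nil || nic.EnableAcceleratedNetworking == nil {
		return false, nil
	}
	return *nic.EnableAcceleratedNetworking, nil
}
```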
STEP: Dumping logs from the "capz-e2e-djz2jd-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-djz2jd/capz-e2e-djz2jd-ha logs
Jul 26 18:20:18.003: INFO: INFO: Collecting logs for node capz-e2e-djz2jd-ha-control-plane-p5pgg in cluster capz-e2e-djz2jd-ha in namespace capz-e2e-djz2jd
... skipping 47 lines ...
STEP: Fetching activity logs took 985.512668ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-djz2jd" namespace
STEP: Deleting all clusters in the capz-e2e-djz2jd namespace
STEP: Deleting cluster capz-e2e-djz2jd-ha
INFO: Waiting for the Cluster capz-e2e-djz2jd/capz-e2e-djz2jd-ha to be deleted
STEP: Waiting for cluster capz-e2e-djz2jd-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-djz2jd-ha-control-plane-v2xkb, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-f7ljg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-djz2jd-ha-control-plane-z6lpb, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-784b4f4c9-x9kkz, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-djz2jd-ha-control-plane-v2xkb, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-8748d, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-djz2jd-ha-control-plane-v2xkb, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-sp52d, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-v2f65, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-djz2jd-ha-control-plane-z6lpb, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-djz2jd-ha-control-plane-z6lpb, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-djz2jd-ha-control-plane-z6lpb, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-djz2jd-ha-control-plane-v2xkb, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ddxlb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-m4q5x, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-djz2jd
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 46m1s on Ginkgo node 2 of 3


... skipping 47 lines ...
  	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:256 +0x1f7
  testing.tRunner(0xc000855c80, 0x22cca10)
  	/usr/local/go/src/testing/testing.go:1193 +0xef
  created by testing.(*T).Run
  	/usr/local/go/src/testing/testing.go:1238 +0x2b3
------------------------------
E0726 18:23:03.036419   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
E0726 18:23:41.844290   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
E0726 18:24:20.813505   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
E0726 18:25:13.448990   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
E0726 18:25:49.697721   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
E0726 18:26:45.722906   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
E0726 18:27:36.723151   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
E0726 18:28:22.501016   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
E0726 18:28:53.485788   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
E0726 18:29:28.330605   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
E0726 18:30:16.610609   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
E0726 18:31:15.927039   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
E0726 18:31:48.350540   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
E0726 18:32:36.295773   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
E0726 18:33:08.124883   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
E0726 18:33:58.899860   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
E0726 18:34:38.007378   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
E0726 18:35:13.237244   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
E0726 18:35:59.574082   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
E0726 18:36:45.951940   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
E0726 18:37:24.406085   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
E0726 18:38:07.164918   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
E0726 18:38:53.100031   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
E0726 18:39:30.837919   23578 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-c01eb4/events?resourceVersion=4225": dial tcp: lookup capz-e2e-c01eb4-public-custom-vnet-b4260091.canadacentral.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation [It] With 3 control-plane nodes and 2 worker nodes 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_accelnet.go:93

Ran 4 of 22 Specs in 2915.949 seconds
FAIL! -- 3 Passed | 1 Failed | 0 Pending | 18 Skipped


Ginkgo ran 1 suite in 50m8.168311223s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...