PR: shysank: v1beta1 cluster upgrade tests (using clusterctl upgrade)
Result: FAILURE
Tests: 1 failed / 11 succeeded
Started: 2021-10-14 00:28
Elapsed: 2h1m
Revision: 12667db5247cc55c0ed42769571c8ca77a3a219a
Refs: 1771

Test Failures


capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation 33m5s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sShould\ssuccessfully\sremediate\sunhealthy\smachines\swith\sMachineHealthCheck\sShould\ssuccessfully\strigger\sKCP\sremediation$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/mhc_remediations.go:115
Failed to get controller-runtime client
Unexpected error:
    <*url.Error | 0xc000d61f80>: {
        Op: "Get",
        URL: "https://mhc-remediation-24c82p-1f6ca03b.uksouth.cloudapp.azure.com:6443/api?timeout=32s",
        Err: <*net.OpError | 0xc000d52fa0>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: <*net.TCPAddr | 0xc000582150>{
                IP: [20, 108, 60, 135],
                Port: 6443,
                Zone: "",
            },
            Err: <*os.SyscallError | 0xc001bf28a0>{
                Syscall: "connect",
                Err: <syscall.Errno>0x6f,
            },
        },
    }
    Get "https://mhc-remediation-24c82p-1f6ca03b.uksouth.cloudapp.azure.com:6443/api?timeout=32s": dial tcp 20.108.60.135:6443: connect: connection refused
occurred
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/cluster_proxy.go:171
				
stdout/stderr from junit.e2e_suite.1.xml



11 Passed Tests

10 Skipped Tests

Error lines from build-log.txt

... skipping 489 lines ...
STEP: Fetching activity logs took 709.855584ms
STEP: Dumping all the Cluster API resources in the "quick-start-24yf14" namespace
STEP: Deleting cluster quick-start-24yf14/quick-start-bm54lk
STEP: Deleting cluster quick-start-bm54lk
INFO: Waiting for the Cluster quick-start-24yf14/quick-start-bm54lk to be deleted
STEP: Waiting for cluster quick-start-bm54lk to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-quick-start-bm54lk-control-plane-t65pk, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-874hg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-quick-start-bm54lk-control-plane-t65pk, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-quick-start-bm54lk-control-plane-t65pk, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-wmjf6, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fz95g, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-mhplb, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9h6g7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-vvr8q, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mzhnn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-quick-start-bm54lk-control-plane-t65pk, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "quick-start" test spec
INFO: Deleting namespace quick-start-24yf14
STEP: Redacting sensitive information from logs


• [SLOW TEST:723.836 seconds]
... skipping 52 lines ...
STEP: Dumping logs from the "kcp-upgrade-k4y06a" workload cluster
STEP: Dumping workload cluster kcp-upgrade-voktlf/kcp-upgrade-k4y06a logs
Oct 14 00:49:05.831: INFO: INFO: Collecting logs for node kcp-upgrade-k4y06a-control-plane-f54hd in cluster kcp-upgrade-k4y06a in namespace kcp-upgrade-voktlf

Oct 14 00:51:15.242: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-k4y06a-control-plane-f54hd

Failed to get logs for machine kcp-upgrade-k4y06a-control-plane-v8k8n, cluster kcp-upgrade-voktlf/kcp-upgrade-k4y06a: dialing public load balancer at kcp-upgrade-k4y06a-2992af1f.uksouth.cloudapp.azure.com: dial tcp 51.132.211.59:22: connect: connection timed out
Oct 14 00:51:16.649: INFO: INFO: Collecting logs for node kcp-upgrade-k4y06a-md-0-26skm in cluster kcp-upgrade-k4y06a in namespace kcp-upgrade-voktlf

Oct 14 00:53:26.317: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-k4y06a-md-0-26skm

Failed to get logs for machine kcp-upgrade-k4y06a-md-0-df4f7cdcd-ndh5f, cluster kcp-upgrade-voktlf/kcp-upgrade-k4y06a: dialing public load balancer at kcp-upgrade-k4y06a-2992af1f.uksouth.cloudapp.azure.com: dial tcp 51.132.211.59:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-voktlf/kcp-upgrade-k4y06a kube-system pod logs
STEP: Fetching kube-system pod logs took 981.326418ms
STEP: Dumping workload cluster kcp-upgrade-voktlf/kcp-upgrade-k4y06a Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-t2px8, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-k4y06a-control-plane-f54hd, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-wmghx, container kube-proxy
... skipping 8 lines ...
STEP: Fetching activity logs took 1.057655157s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-voktlf" namespace
STEP: Deleting cluster kcp-upgrade-voktlf/kcp-upgrade-k4y06a
STEP: Deleting cluster kcp-upgrade-k4y06a
INFO: Waiting for the Cluster kcp-upgrade-voktlf/kcp-upgrade-k4y06a to be deleted
STEP: Waiting for cluster kcp-upgrade-k4y06a to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-k4y06a-control-plane-f54hd, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-k4y06a-control-plane-f54hd, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-27sxt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-8gfz4, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-k4y06a-control-plane-f54hd, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-t2px8, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wmghx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-467x5, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-k4y06a-control-plane-f54hd, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ppk99, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-lqnbd, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-voktlf
STEP: Redacting sensitive information from logs


• [SLOW TEST:1348.410 seconds]
... skipping 53 lines ...
Oct 14 01:05:55.287: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-nc1a30-control-plane-mgzsr

Oct 14 01:05:56.948: INFO: INFO: Collecting logs for node md-rollout-nc1a30-md-0-8ls7x in cluster md-rollout-nc1a30 in namespace md-rollout-ij3o13

Oct 14 01:08:10.370: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-nc1a30-md-0-8ls7x

Failed to get logs for machine md-rollout-nc1a30-md-0-57f8f59c9-vxq5x, cluster md-rollout-ij3o13/md-rollout-nc1a30: [dialing from control plane to target node at md-rollout-nc1a30-md-0-8ls7x: ssh: rejected: connect failed (Connection timed out), failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollout-nc1a30-md-0-8ls7x' under resource group 'capz-e2e-32o7ra' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"]
Oct 14 01:08:11.321: INFO: INFO: Collecting logs for node md-rollout-nc1a30-md-0-kahjp0-skcgr in cluster md-rollout-nc1a30 in namespace md-rollout-ij3o13

Oct 14 01:08:21.699: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-nc1a30-md-0-kahjp0-skcgr

STEP: Dumping workload cluster md-rollout-ij3o13/md-rollout-nc1a30 kube-system pod logs
STEP: Fetching kube-system pod logs took 978.834567ms
... skipping 12 lines ...
STEP: Fetching activity logs took 580.480673ms
STEP: Dumping all the Cluster API resources in the "md-rollout-ij3o13" namespace
STEP: Deleting cluster md-rollout-ij3o13/md-rollout-nc1a30
STEP: Deleting cluster md-rollout-nc1a30
INFO: Waiting for the Cluster md-rollout-ij3o13/md-rollout-nc1a30 to be deleted
STEP: Waiting for cluster md-rollout-nc1a30 to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-md-rollout-nc1a30-control-plane-mgzsr, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8zbhf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-rollout-nc1a30-control-plane-mgzsr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-46n5b, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-rollout-nc1a30-control-plane-mgzsr, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-d24nk, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-mn7br, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-rollout-nc1a30-control-plane-mgzsr, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-tgdzm, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-rollout" test spec
INFO: Deleting namespace md-rollout-ij3o13
STEP: Redacting sensitive information from logs


• [SLOW TEST:1018.410 seconds]
... skipping 54 lines ...
STEP: Dumping logs from the "kcp-upgrade-9qoy4n" workload cluster
STEP: Dumping workload cluster kcp-upgrade-o62evc/kcp-upgrade-9qoy4n logs
Oct 14 01:03:16.728: INFO: INFO: Collecting logs for node kcp-upgrade-9qoy4n-control-plane-ljmw7 in cluster kcp-upgrade-9qoy4n in namespace kcp-upgrade-o62evc

Oct 14 01:05:27.213: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-9qoy4n-control-plane-ljmw7

Failed to get logs for machine kcp-upgrade-9qoy4n-control-plane-9qxh5, cluster kcp-upgrade-o62evc/kcp-upgrade-9qoy4n: dialing public load balancer at kcp-upgrade-9qoy4n-42d0fcab.uksouth.cloudapp.azure.com: dial tcp 51.132.211.38:22: connect: connection timed out
Oct 14 01:05:28.541: INFO: INFO: Collecting logs for node kcp-upgrade-9qoy4n-control-plane-7z28b in cluster kcp-upgrade-9qoy4n in namespace kcp-upgrade-o62evc

Oct 14 01:07:38.281: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-9qoy4n-control-plane-7z28b

Failed to get logs for machine kcp-upgrade-9qoy4n-control-plane-t9p4s, cluster kcp-upgrade-o62evc/kcp-upgrade-9qoy4n: dialing public load balancer at kcp-upgrade-9qoy4n-42d0fcab.uksouth.cloudapp.azure.com: dial tcp 51.132.211.38:22: connect: connection timed out
Oct 14 01:07:39.483: INFO: INFO: Collecting logs for node kcp-upgrade-9qoy4n-control-plane-7pdwm in cluster kcp-upgrade-9qoy4n in namespace kcp-upgrade-o62evc

Oct 14 01:09:49.353: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-9qoy4n-control-plane-7pdwm

Failed to get logs for machine kcp-upgrade-9qoy4n-control-plane-wsnpg, cluster kcp-upgrade-o62evc/kcp-upgrade-9qoy4n: dialing public load balancer at kcp-upgrade-9qoy4n-42d0fcab.uksouth.cloudapp.azure.com: dial tcp 51.132.211.38:22: connect: connection timed out
Oct 14 01:09:51.786: INFO: INFO: Collecting logs for node kcp-upgrade-9qoy4n-md-0-pz5kk in cluster kcp-upgrade-9qoy4n in namespace kcp-upgrade-o62evc

Oct 14 01:12:02.477: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-9qoy4n-md-0-pz5kk

Failed to get logs for machine kcp-upgrade-9qoy4n-md-0-6dc659488f-z8dfx, cluster kcp-upgrade-o62evc/kcp-upgrade-9qoy4n: dialing public load balancer at kcp-upgrade-9qoy4n-42d0fcab.uksouth.cloudapp.azure.com: dial tcp 51.132.211.38:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-o62evc/kcp-upgrade-9qoy4n kube-system pod logs
STEP: Fetching kube-system pod logs took 887.327357ms
STEP: Dumping workload cluster kcp-upgrade-o62evc/kcp-upgrade-9qoy4n Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-mts5d, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-9qoy4n-control-plane-7pdwm, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-h82fs, container kube-proxy
... skipping 14 lines ...
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-9qoy4n-control-plane-7z28b, container etcd
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-9qoy4n-control-plane-ljmw7, container etcd
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-9qoy4n-control-plane-7z28b, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-hfcst, container coredns
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-9qoy4n-control-plane-7pdwm, container etcd
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-9qoy4n-control-plane-ljmw7, container kube-controller-manager
STEP: Got error while iterating over activity logs for resource group capz-e2e-star73: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000271012s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-o62evc" namespace
STEP: Deleting cluster kcp-upgrade-o62evc/kcp-upgrade-9qoy4n
STEP: Deleting cluster kcp-upgrade-9qoy4n
INFO: Waiting for the Cluster kcp-upgrade-o62evc/kcp-upgrade-9qoy4n to be deleted
STEP: Waiting for cluster kcp-upgrade-9qoy4n to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-9qoy4n-control-plane-7z28b, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-995zh, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-9qoy4n-control-plane-ljmw7, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-9qoy4n-control-plane-7z28b, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-9qoy4n-control-plane-7z28b, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-h82fs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-9qoy4n-control-plane-7z28b, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ttvz4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2v4hf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4cbpz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-9qoy4n-control-plane-ljmw7, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-9qoy4n-control-plane-ljmw7, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-9qoy4n-control-plane-ljmw7, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-mts5d, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-hfcst, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-dktpv, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-o62evc
STEP: Redacting sensitive information from logs


• [SLOW TEST:2569.289 seconds]
... skipping 91 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-m31wre-control-plane-pfkwd, container kube-scheduler
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-m31wre-control-plane-zv5l5, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-mlw6t, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-m31wre-control-plane-zv5l5, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-m31wre-control-plane-pfkwd, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-nncnl, container calico-node
STEP: Got error while iterating over activity logs for resource group capz-e2e-51w2s3: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00020263s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-m462pv" namespace
STEP: Deleting cluster kcp-upgrade-m462pv/kcp-upgrade-m31wre
STEP: Deleting cluster kcp-upgrade-m31wre
INFO: Waiting for the Cluster kcp-upgrade-m462pv/kcp-upgrade-m31wre to be deleted
STEP: Waiting for cluster kcp-upgrade-m31wre to be deleted
... skipping 13 lines ...
Running the Cluster API E2E tests Running the self-hosted spec 
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:103

STEP: Creating namespace "self-hosted" for hosting the cluster
Oct 14 01:16:16.031: INFO: starting to create namespace for hosting the "self-hosted" test spec
2021/10/14 01:16:16 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-xkmxi7" using the "(default)" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-xkmxi7 --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
... skipping 74 lines ...
STEP: Fetching activity logs took 579.344465ms
STEP: Dumping all the Cluster API resources in the "self-hosted" namespace
STEP: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-xkmxi7
INFO: Waiting for the Cluster self-hosted/self-hosted-xkmxi7 to be deleted
STEP: Waiting for cluster self-hosted-xkmxi7 to be deleted
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-rs7l7, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-98224, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-self-hosted-xkmxi7-control-plane-m945k, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-self-hosted-xkmxi7-control-plane-m945k, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-9xnrc, container calico-node: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-s5h5b, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-598cb9775c-kg27m, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-r7gm6, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-self-hosted-xkmxi7-control-plane-m945k, container kube-controller-manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-d8t5z, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-nt7js, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-pxq7s, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-self-hosted-xkmxi7-control-plane-m945k, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-qxnjp, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-tm8lx, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
STEP: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
STEP: Redacting sensitive information from logs

... skipping 266 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:40
  Should successfully remediate unhealthy machines with MachineHealthCheck
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:169
    Should successfully trigger KCP remediation [It]
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/mhc_remediations.go:115

    Failed to get controller-runtime client
    Unexpected error:
        <*url.Error | 0xc000d61f80>: {
            Op: "Get",
            URL: "https://mhc-remediation-24c82p-1f6ca03b.uksouth.cloudapp.azure.com:6443/api?timeout=32s",
            Err: <*net.OpError | 0xc000d52fa0>{
                Op: "dial",
                Net: "tcp",
                Source: nil,
... skipping 131 lines ...
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-rsj2d, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-8z2v5, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-t4bj2, container coredns
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-qv6zb, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-tw2jr, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-ilmdpu-control-plane-rcgkw, container kube-apiserver
STEP: Error starting logs stream for pod kube-system/calico-node-6s4l4, container calico-node: pods "machine-pool-ilmdpu-mp-0000001" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-tw2jr, container kube-proxy: pods "machine-pool-ilmdpu-mp-0000001" not found
STEP: Error starting logs stream for pod kube-system/calico-node-h2dr9, container calico-node: pods "machine-pool-ilmdpu-mp-0000000" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-2rwzh, container kube-proxy: pods "machine-pool-ilmdpu-mp-0000000" not found
STEP: Fetching activity logs took 541.750043ms
STEP: Dumping all the Cluster API resources in the "machine-pool-i90853" namespace
STEP: Deleting cluster machine-pool-i90853/machine-pool-ilmdpu
STEP: Deleting cluster machine-pool-ilmdpu
INFO: Waiting for the Cluster machine-pool-i90853/machine-pool-ilmdpu to be deleted
STEP: Waiting for cluster machine-pool-ilmdpu to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-rsj2d, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-gc98p, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-svz8d, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-qv6zb, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-ilmdpu-control-plane-rcgkw, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-ilmdpu-control-plane-rcgkw, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8z2v5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5r9cz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-ilmdpu-control-plane-rcgkw, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-t4bj2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-ilmdpu-control-plane-rcgkw, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-i90853
STEP: Redacting sensitive information from logs


• [SLOW TEST:1197.314 seconds]
... skipping 143 lines ...
STEP: Dumping logs from the "node-drain-bywf4n" workload cluster
STEP: Dumping workload cluster node-drain-q1yncj/node-drain-bywf4n logs
Oct 14 02:21:12.429: INFO: INFO: Collecting logs for node node-drain-bywf4n-control-plane-zdvxx in cluster node-drain-bywf4n in namespace node-drain-q1yncj

Oct 14 02:23:22.797: INFO: INFO: Collecting boot logs for AzureMachine node-drain-bywf4n-control-plane-zdvxx

Failed to get logs for machine node-drain-bywf4n-control-plane-dmds5, cluster node-drain-q1yncj/node-drain-bywf4n: dialing public load balancer at node-drain-bywf4n-2ec726c0.uksouth.cloudapp.azure.com: dial tcp 20.90.226.80:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-q1yncj/node-drain-bywf4n kube-system pod logs
STEP: Fetching kube-system pod logs took 931.692047ms
STEP: Dumping workload cluster node-drain-q1yncj/node-drain-bywf4n Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-5s6fq, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-bywf4n-control-plane-zdvxx, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-node-drain-bywf4n-control-plane-zdvxx, container kube-controller-manager
... skipping 6 lines ...
STEP: Fetching activity logs took 1.263936866s
STEP: Dumping all the Cluster API resources in the "node-drain-q1yncj" namespace
STEP: Deleting cluster node-drain-q1yncj/node-drain-bywf4n
STEP: Deleting cluster node-drain-bywf4n
INFO: Waiting for the Cluster node-drain-q1yncj/node-drain-bywf4n to be deleted
STEP: Waiting for cluster node-drain-bywf4n to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-node-drain-bywf4n-control-plane-zdvxx, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-node-drain-bywf4n-control-plane-zdvxx, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-lgxng, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-node-drain-bywf4n-control-plane-zdvxx, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-5s6fq, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-node-drain-bywf4n-control-plane-zdvxx, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-l2fh7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-tsq7q, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-j52br, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "node-drain" test spec
INFO: Deleting namespace node-drain-q1yncj
STEP: Redacting sensitive information from logs


• [SLOW TEST:1976.762 seconds]
... skipping 7 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck [It] Should successfully trigger KCP remediation 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/cluster_proxy.go:171

Ran 12 of 22 Specs in 6899.407 seconds
FAIL! -- 11 Passed | 1 Failed | 0 Pending | 10 Skipped


Ginkgo ran 1 suite in 1h56m35.174595076s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...