PR: CecileRobertMichon: Update CAPI to v0.4.3
Result: FAILURE
Tests: 1 failed / 11 succeeded
Started: 2021-09-27 17:53
Elapsed: 1h55m
Revision: 76853c9a14e099ae7712a9902abe6dd3c77c289d
Refs: 1728

Test Failures


capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation (22m42s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sShould\ssuccessfully\sremediate\sunhealthy\smachines\swith\sMachineHealthCheck\sShould\ssuccessfully\strigger\sKCP\sremediation$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.3/e2e/mhc_remediations.go:115
Failed to get controller-runtime client
Unexpected error:
    <*url.Error | 0xc0010bef00>: {
        Op: "Get",
        URL: "https://mhc-remediation-u7zzxd-f8242c64.uksouth.cloudapp.azure.com:6443/api?timeout=32s",
        Err: <*http.httpError | 0xc00014c2a0>{
            err: "net/http: request canceled (Client.Timeout exceeded while awaiting headers)",
            timeout: true,
        },
    }
    Get "https://mhc-remediation-u7zzxd-f8242c64.uksouth.cloudapp.azure.com:6443/api?timeout=32s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
occurred
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.3/framework/cluster_proxy.go:171
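The failure itself is a discovery timeout: per cluster_proxy.go:171 and the "Failed to get controller-runtime client" message, the test framework's cluster proxy builds a controller-runtime client for the workload cluster, and the initial Get on the apiserver's /api endpoint never returned headers within the 32s client timeout visible in the URL. A minimal, self-contained sketch of that construction path (not the framework's actual code; the kubeconfig path is a placeholder) shows where such a *url.Error surfaces:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func main() {
	// Load the workload cluster's kubeconfig (placeholder path).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/workload.kubeconfig")
	if err != nil {
		panic(err)
	}

	// client.New performs REST mapping/discovery against the apiserver.
	// If the control plane behind the load balancer is unreachable, this
	// fails with the *url.Error seen above ("Client.Timeout exceeded
	// while awaiting headers") instead of returning a client.
	c, err := client.New(cfg, client.Options{})
	if err != nil {
		fmt.Println("Failed to get controller-runtime client:", err)
		return
	}
	_ = c
}

In context, this means the workload cluster's apiserver at mhc-remediation-u7zzxd-f8242c64.uksouth.cloudapp.azure.com:6443 stopped answering while the KCP remediation spec was still exercising the control plane.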


Error lines from build-log.txt

... skipping 492 lines ...
STEP: Fetching activity logs took 512.222857ms
STEP: Dumping all the Cluster API resources in the "quick-start-86m32z" namespace
STEP: Deleting cluster quick-start-86m32z/quick-start-38od9u
STEP: Deleting cluster quick-start-38od9u
INFO: Waiting for the Cluster quick-start-86m32z/quick-start-38od9u to be deleted
STEP: Waiting for cluster quick-start-38od9u to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-q7nzl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-quick-start-38od9u-control-plane-qptnp, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-844lx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-quick-start-38od9u-control-plane-qptnp, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lr5wh, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-dgcph, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-quick-start-38od9u-control-plane-qptnp, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-2pnmc, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7tfzw, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-quick-start-38od9u-control-plane-qptnp, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-56t4x, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "quick-start" test spec
INFO: Deleting namespace quick-start-86m32z
STEP: Redacting sensitive information from logs
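The "http2: client connection lost" lines above, and the many like them later in this log, come from the log watchers: each one follows a pod's logs over a long-lived stream, and the streams die mid-copy when the workload cluster is deleted out from under them. A minimal client-go sketch of such a watcher (the pod and kubeconfig names are placeholders taken from this log, not the suite's actual code):

package main

import (
	"context"
	"fmt"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Workload-cluster kubeconfig path is a placeholder.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/workload.kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Follow a container's logs the way the suite's log watchers do.
	req := cs.CoreV1().Pods("kube-system").GetLogs("calico-node-q7nzl",
		&corev1.PodLogOptions{Container: "calico-node", Follow: true})
	stream, err := req.Stream(context.Background())
	if err != nil {
		// Corresponds to the "Error starting logs stream ..." lines.
		panic(err)
	}
	defer stream.Close()

	// When the cluster's apiserver goes away mid-stream, this copy fails
	// with "http2: client connection lost".
	if _, err := io.Copy(os.Stdout, stream); err != nil {
		fmt.Println("Got error while streaming logs:", err)
	}
}

These stream errors are benign teardown noise rather than test failures, which is why the specs that emit them still pass.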


• [SLOW TEST:661.373 seconds]
... skipping 52 lines ...
STEP: Dumping logs from the "kcp-upgrade-tu0x42" workload cluster
STEP: Dumping workload cluster kcp-upgrade-57eroj/kcp-upgrade-tu0x42 logs
Sep 27 18:14:22.089: INFO: INFO: Collecting logs for node kcp-upgrade-tu0x42-control-plane-9m2x6 in cluster kcp-upgrade-tu0x42 in namespace kcp-upgrade-57eroj

Sep 27 18:16:32.268: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-tu0x42-control-plane-9m2x6

Failed to get logs for machine kcp-upgrade-tu0x42-control-plane-ttnfk, cluster kcp-upgrade-57eroj/kcp-upgrade-tu0x42: dialing public load balancer at kcp-upgrade-tu0x42-1bcf3474.uksouth.cloudapp.azure.com: dial tcp 20.108.18.198:22: connect: connection timed out
Sep 27 18:16:33.639: INFO: INFO: Collecting logs for node kcp-upgrade-tu0x42-md-0-ph2dx in cluster kcp-upgrade-tu0x42 in namespace kcp-upgrade-57eroj

Sep 27 18:18:43.344: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-tu0x42-md-0-ph2dx

Failed to get logs for machine kcp-upgrade-tu0x42-md-0-5cb894b684-48k9w, cluster kcp-upgrade-57eroj/kcp-upgrade-tu0x42: dialing public load balancer at kcp-upgrade-tu0x42-1bcf3474.uksouth.cloudapp.azure.com: dial tcp 20.108.18.198:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-57eroj/kcp-upgrade-tu0x42 kube-system pod logs
STEP: Fetching kube-system pod logs took 940.217023ms
STEP: Dumping workload cluster kcp-upgrade-57eroj/kcp-upgrade-tu0x42 Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-s6dms, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-pvmzc, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-tu0x42-control-plane-9m2x6, container kube-apiserver
... skipping 8 lines ...
STEP: Fetching activity logs took 1.012038252s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-57eroj" namespace
STEP: Deleting cluster kcp-upgrade-57eroj/kcp-upgrade-tu0x42
STEP: Deleting cluster kcp-upgrade-tu0x42
INFO: Waiting for the Cluster kcp-upgrade-57eroj/kcp-upgrade-tu0x42 to be deleted
STEP: Waiting for cluster kcp-upgrade-tu0x42 to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-s6dms, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-tu0x42-control-plane-9m2x6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-spsqr, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-tu0x42-control-plane-9m2x6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-tu0x42-control-plane-9m2x6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kz7d4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-tu0x42-control-plane-9m2x6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-gf2wj, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xt9p5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-pvmzc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-2bxmj, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-57eroj
STEP: Redacting sensitive information from logs


• [SLOW TEST:1556.438 seconds]
... skipping 57 lines ...
Sep 27 18:33:30.259: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-ycuaoc-md-0-swh58q-n24x4

Sep 27 18:33:30.831: INFO: INFO: Collecting logs for node md-rollout-ycuaoc-md-0-ljtmm in cluster md-rollout-ycuaoc in namespace md-rollout-ghyii4

Sep 27 18:35:46.109: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-ycuaoc-md-0-ljtmm

Failed to get logs for machine md-rollout-ycuaoc-md-0-6f44db8949-wjv79, cluster md-rollout-ghyii4/md-rollout-ycuaoc: [dialing from control plane to target node at md-rollout-ycuaoc-md-0-ljtmm: ssh: rejected: connect failed (Connection timed out), failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollout-ycuaoc-md-0-ljtmm' under resource group 'capz-e2e-v69hud' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"]
STEP: Dumping workload cluster md-rollout-ghyii4/md-rollout-ycuaoc kube-system pod logs
STEP: Fetching kube-system pod logs took 994.176938ms
STEP: Dumping workload cluster md-rollout-ghyii4/md-rollout-ycuaoc Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-tkvxt, container coredns
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-grd68, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/etcd-md-rollout-ycuaoc-control-plane-mdlnf, container etcd
... skipping 8 lines ...
STEP: Fetching activity logs took 1.266658748s
STEP: Dumping all the Cluster API resources in the "md-rollout-ghyii4" namespace
STEP: Deleting cluster md-rollout-ghyii4/md-rollout-ycuaoc
STEP: Deleting cluster md-rollout-ycuaoc
INFO: Waiting for the Cluster md-rollout-ghyii4/md-rollout-ycuaoc to be deleted
STEP: Waiting for cluster md-rollout-ycuaoc to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-md-rollout-ycuaoc-control-plane-mdlnf, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-tkvxt, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-rollout-ycuaoc-control-plane-mdlnf, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-rollout" test spec
INFO: Deleting namespace md-rollout-ghyii4
STEP: Redacting sensitive information from logs


• [SLOW TEST:1048.794 seconds]
... skipping 92 lines ...
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-jec8ql-control-plane-f9glq, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-pv5n7, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-jgnwf, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-w74cj, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-jec8ql-control-plane-hw7j8, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-jec8ql-control-plane-f9glq, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-xnnsmb: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001187464s
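Both "iterating over activity logs" failures in this run (here and in the later kcp-upgrade spec) stop at almost exactly 30s with "context deadline exceeded", which suggests the activity-log dump is bounded by a roughly 30s context and Azure's paginated listing could not finish inside it. A self-contained sketch of that failure mode (the pager below is a hypothetical stand-in for insights.ActivityLogsClient's page iteration, not the SDK itself, and the per-page latency is invented for illustration):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// nextPage stands in for fetching one page of activity-log results.
func nextPage(ctx context.Context) error {
	select {
	case <-time.After(11 * time.Second): // pretend each page takes ~11s
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	// Assume the dumper bounds the whole fetch at 30 seconds.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	start := time.Now()
	for page := 1; ; page++ {
		if err := nextPage(ctx); err != nil {
			if errors.Is(err, context.DeadlineExceeded) {
				// Mirrors "Fetching activity logs took 30.00...s" followed
				// by "context deadline exceeded" in the log above.
				fmt.Printf("stopped on page %d after %s: %v\n",
					page, time.Since(start), err)
			}
			return
		}
	}
}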
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-qounqf" namespace
STEP: Deleting cluster kcp-upgrade-qounqf/kcp-upgrade-jec8ql
STEP: Deleting cluster kcp-upgrade-jec8ql
INFO: Waiting for the Cluster kcp-upgrade-qounqf/kcp-upgrade-jec8ql to be deleted
STEP: Waiting for cluster kcp-upgrade-jec8ql to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-5h7d2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-jec8ql-control-plane-m8nqm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-jec8ql-control-plane-f9glq, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-jec8ql-control-plane-m8nqm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-w74cj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-jec8ql-control-plane-hw7j8, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-jec8ql-control-plane-f9glq, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-jec8ql-control-plane-f9glq, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-jgnwf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-jec8ql-control-plane-hw7j8, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-pv5n7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jknj6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-gtbp5, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-jec8ql-control-plane-hw7j8, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mgkr5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6zkkq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-pdxr5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-jec8ql-control-plane-m8nqm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-jec8ql-control-plane-m8nqm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-xz4l8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-jec8ql-control-plane-f9glq, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mvrjn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-jec8ql-control-plane-hw7j8, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-qounqf
STEP: Redacting sensitive information from logs


• [SLOW TEST:2703.587 seconds]
... skipping 91 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-8z56q, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-9zkxt, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-rutnsa-control-plane-ptfjt, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-rutnsa-control-plane-hfd2g, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-rutnsa-control-plane-qj5cb, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-88rj6, container calico-node
STEP: Got error while iterating over activity logs for resource group capz-e2e-xha1qd: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001006131s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-x4nfdn" namespace
STEP: Deleting cluster kcp-upgrade-x4nfdn/kcp-upgrade-rutnsa
STEP: Deleting cluster kcp-upgrade-rutnsa
INFO: Waiting for the Cluster kcp-upgrade-x4nfdn/kcp-upgrade-rutnsa to be deleted
STEP: Waiting for cluster kcp-upgrade-rutnsa to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7hdrx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-48x95, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-dmqzn, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-rutnsa-control-plane-hfd2g, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-rutnsa-control-plane-hfd2g, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-rutnsa-control-plane-hfd2g, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-rutnsa-control-plane-hfd2g, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-x4nfdn
STEP: Redacting sensitive information from logs


• [SLOW TEST:2529.036 seconds]
... skipping 7 lines ...
Running the Cluster API E2E tests Running the self-hosted spec 
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:103

STEP: Creating namespace "self-hosted" for hosting the cluster
Sep 27 18:44:02.876: INFO: starting to create namespace for hosting the "self-hosted" test spec
2021/09/27 18:44:02 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-v2nznb" using the "(default)" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-v2nznb --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
... skipping 75 lines ...
STEP: Fetching activity logs took 1.965133878s
STEP: Dumping all the Cluster API resources in the "self-hosted" namespace
STEP: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-v2nznb
INFO: Waiting for the Cluster self-hosted/self-hosted-v2nznb to be deleted
STEP: Waiting for cluster self-hosted-v2nznb to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-self-hosted-v2nznb-control-plane-frmh2, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-6cmcr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-95v4x, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-self-hosted-v2nznb-control-plane-frmh2, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-58hs2, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ttw8j, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-self-hosted-v2nznb-control-plane-frmh2, container kube-scheduler: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-bfcd78f99-24shz, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-54f94494bd-9trpn, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-self-hosted-v2nznb-control-plane-frmh2, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5p2vk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-jk8bx, container calico-node: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-665dbcd54b-shfgs, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-665dbcd54b-shfgs, container kube-rbac-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-x96c6, container coredns: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-66b74b44bd-w2zzn, container manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
STEP: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
STEP: Redacting sensitive information from logs

... skipping 76 lines ...
STEP: Fetching activity logs took 597.547926ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-vd9c9m" namespace
STEP: Deleting cluster mhc-remediation-vd9c9m/mhc-remediation-ajlsqv
STEP: Deleting cluster mhc-remediation-ajlsqv
INFO: Waiting for the Cluster mhc-remediation-vd9c9m/mhc-remediation-ajlsqv to be deleted
STEP: Waiting for cluster mhc-remediation-ajlsqv to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-fsfh7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7pqk9, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-ajlsqv-control-plane-8v5w2, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mtb52, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-ajlsqv-control-plane-8v5w2, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-ajlsqv-control-plane-8v5w2, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-n74x8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-c2cqr, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-ajlsqv-control-plane-8v5w2, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4499m, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-p7ccs, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-vd9c9m
STEP: Redacting sensitive information from logs


• [SLOW TEST:1253.828 seconds]
... skipping 51 lines ...
STEP: Dumping logs from the "mhc-remediation-u7zzxd" workload cluster
STEP: Dumping workload cluster mhc-remediation-vwc1ps/mhc-remediation-u7zzxd logs
Sep 27 19:07:25.606: INFO: INFO: Collecting logs for node mhc-remediation-u7zzxd-control-plane-29l6d in cluster mhc-remediation-u7zzxd in namespace mhc-remediation-vwc1ps

Sep 27 19:07:33.116: INFO: INFO: Collecting boot logs for AzureMachine mhc-remediation-u7zzxd-control-plane-29l6d

Failed to get logs for machine mhc-remediation-u7zzxd-control-plane-j66tw, cluster mhc-remediation-vwc1ps/mhc-remediation-u7zzxd: [dialing from control plane to target node at mhc-remediation-u7zzxd-control-plane-29l6d: ssh: rejected: connect failed (Temporary failure in name resolution), failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/mhc-remediation-u7zzxd-control-plane-29l6d' under resource group 'capz-e2e-qeiqay' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"]
Sep 27 19:07:34.054: INFO: INFO: Collecting logs for node mhc-remediation-u7zzxd-control-plane-9wpkh in cluster mhc-remediation-u7zzxd in namespace mhc-remediation-vwc1ps

Sep 27 19:07:47.926: INFO: INFO: Collecting boot logs for AzureMachine mhc-remediation-u7zzxd-control-plane-9wpkh

Sep 27 19:07:49.099: INFO: INFO: Collecting logs for node mhc-remediation-u7zzxd-control-plane-vc8bm in cluster mhc-remediation-u7zzxd in namespace mhc-remediation-vwc1ps

... skipping 26 lines ...
STEP: Fetching activity logs took 620.102828ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-vwc1ps" namespace
STEP: Deleting cluster mhc-remediation-vwc1ps/mhc-remediation-u7zzxd
STEP: Deleting cluster mhc-remediation-u7zzxd
INFO: Waiting for the Cluster mhc-remediation-vwc1ps/mhc-remediation-u7zzxd to be deleted
STEP: Waiting for cluster mhc-remediation-u7zzxd to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-rgf7r, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-u7zzxd-control-plane-9wpkh, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-snfzc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-u7zzxd-control-plane-9wpkh, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rwpmq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-gvjfk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xzwf8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-66lw4, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-u7zzxd-control-plane-9wpkh, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-thmzw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-u7zzxd-control-plane-9wpkh, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-u7zzxd-control-plane-vc8bm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-u7zzxd-control-plane-vc8bm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-u7zzxd-control-plane-vc8bm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7fzkq, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-bvd8x, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-u7zzxd-control-plane-vc8bm, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-vwc1ps
STEP: Redacting sensitive information from logs


• Failure [1362.480 seconds]
Running the Cluster API E2E tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:40
  Should successfully remediate unhealthy machines with MachineHealthCheck
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:169
    Should successfully trigger KCP remediation [It]
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.3/e2e/mhc_remediations.go:115

    Failed to get controller-runtime client
    Unexpected error:
        <*url.Error | 0xc0010bef00>: {
            Op: "Get",
            URL: "https://mhc-remediation-u7zzxd-f8242c64.uksouth.cloudapp.azure.com:6443/api?timeout=32s",
            Err: <*http.httpError | 0xc00014c2a0>{
                err: "net/http: request canceled (Client.Timeout exceeded while awaiting headers)",
                timeout: true,
            },
... skipping 191 lines ...
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-2b8n6, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-5hjdf, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-xfq96, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-b9shr, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-gk2pd, container coredns
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-1f2svt-control-plane-l4t7m, container etcd
STEP: Error starting logs stream for pod kube-system/kube-proxy-nwcqp, container kube-proxy: pods "machine-pool-1f2svt-mp-0000001" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-9prkw, container kube-proxy: pods "machine-pool-1f2svt-mp-0000000" not found
STEP: Error starting logs stream for pod kube-system/calico-node-5hjdf, container calico-node: pods "machine-pool-1f2svt-mp-0000000" not found
STEP: Error starting logs stream for pod kube-system/calico-node-xfq96, container calico-node: pods "machine-pool-1f2svt-mp-0000001" not found
STEP: Fetching activity logs took 668.475839ms
STEP: Dumping all the Cluster API resources in the "machine-pool-z1f0dh" namespace
STEP: Deleting cluster machine-pool-z1f0dh/machine-pool-1f2svt
STEP: Deleting cluster machine-pool-1f2svt
INFO: Waiting for the Cluster machine-pool-z1f0dh/machine-pool-1f2svt to be deleted
STEP: Waiting for cluster machine-pool-1f2svt to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-b9shr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-1f2svt-control-plane-l4t7m, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-1f2svt-control-plane-l4t7m, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rcrmd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-1f2svt-control-plane-l4t7m, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-gk2pd, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-tfqxj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-2b8n6, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-dkw7f, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-4bf9r, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-1f2svt-control-plane-l4t7m, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-z1f0dh
STEP: Redacting sensitive information from logs


• [SLOW TEST:1113.283 seconds]
... skipping 75 lines ...
STEP: Fetching activity logs took 572.436347ms
STEP: Dumping all the Cluster API resources in the "md-scale-cl2tdp" namespace
STEP: Deleting cluster md-scale-cl2tdp/md-scale-xqqjqh
STEP: Deleting cluster md-scale-xqqjqh
INFO: Waiting for the Cluster md-scale-cl2tdp/md-scale-xqqjqh to be deleted
STEP: Waiting for cluster md-scale-xqqjqh to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-zfw6s, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-scale-xqqjqh-control-plane-pn2v2, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-dxq8h, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-scale-xqqjqh-control-plane-pn2v2, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-scale-xqqjqh-control-plane-pn2v2, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-mj9b7, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-d695h, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-scale-xqqjqh-control-plane-pn2v2, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-shv89, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-scale" test spec
INFO: Deleting namespace md-scale-cl2tdp
STEP: Redacting sensitive information from logs


• [SLOW TEST:1355.672 seconds]
... skipping 74 lines ...
STEP: Fetching activity logs took 1.029250714s
STEP: Dumping all the Cluster API resources in the "node-drain-1b92pe" namespace
STEP: Deleting cluster node-drain-1b92pe/node-drain-py8xr5
STEP: Deleting cluster node-drain-py8xr5
INFO: Waiting for the Cluster node-drain-1b92pe/node-drain-py8xr5 to be deleted
STEP: Waiting for cluster node-drain-py8xr5 to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-node-drain-py8xr5-control-plane-k5qnj, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-node-drain-py8xr5-control-plane-k5qnj, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-zcsv4, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-l6vsv, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-node-drain-py8xr5-control-plane-k5qnj, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nfclf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-q5lz6, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-node-drain-py8xr5-control-plane-k5qnj, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-twhdt, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "node-drain" test spec
INFO: Deleting namespace node-drain-1b92pe
STEP: Redacting sensitive information from logs


• [SLOW TEST:1840.754 seconds]
... skipping 7 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck [It] Should successfully trigger KCP remediation 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.3/framework/cluster_proxy.go:171

Ran 12 of 22 Specs in 6552.505 seconds
FAIL! -- 11 Passed | 1 Failed | 0 Pending | 10 Skipped


Ginkgo ran 1 suite in 1h50m36.385390571s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...