PR: shysank: v1beta1 cluster upgrade tests (using clusterctl upgrade)
Result: FAILURE
Tests: 1 failed / 12 succeeded
Started: 2021-10-14 05:14
Elapsed: 1h57m
Revision: 9483fab64f9c3c3f2d778ba619babbd1b5c50d14
Refs: 1771

Test Failures


capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation (24m34s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sShould\ssuccessfully\sremediate\sunhealthy\smachines\swith\sMachineHealthCheck\sShould\ssuccessfully\strigger\sKCP\sremediation$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/mhc_remediations.go:115
Timed out after 1200.001s.
Expected
    <int>: 1
to equal
    <int>: 3
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/controlplane_helpers.go:108
				
stdout/stderr from junit.e2e_suite.1.xml
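
The timeout above is the Cluster API framework waiting 1200s (20 minutes) for the KubeadmControlPlane to reach 3 ready control-plane machines after remediation, while the count never progressed past 1. Below is a minimal Go sketch of the kind of Gomega polling assertion that produces this failure shape; the helper name, intervals, and return value are illustrative assumptions, not the upstream framework code in controlplane_helpers.go.

package e2e_sketch

import (
	"testing"
	"time"

	. "github.com/onsi/gomega"
)

// TestControlPlaneReplicaCount is a hypothetical stand-in for the wait in
// framework/controlplane_helpers.go: poll the number of ready control-plane
// machines until it matches the desired replica count.
func TestControlPlaneReplicaCount(t *testing.T) {
	g := NewWithT(t)

	// Illustrative helper; the real framework lists Machines owned by the
	// KubeadmControlPlane and counts the ones that are ready.
	countReadyControlPlaneMachines := func() int {
		return 1 // in this run the observed count stayed at 1
	}

	// On timeout Gomega reports the shape seen above:
	// "Timed out after 1200.001s. Expected <int>: 1 to equal <int>: 3".
	g.Eventually(countReadyControlPlaneMachines, 20*time.Minute, 10*time.Second).
		Should(Equal(3), "control plane never reached 3 ready replicas")
}

The go run hack/e2e.go command shown above, with its --ginkgo.focus regex, re-runs only this spec.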



12 Passed Tests

10 Skipped Tests

Error lines from build-log.txt

... skipping 489 lines ...
STEP: Fetching activity logs took 702.51616ms
STEP: Dumping all the Cluster API resources in the "quick-start-g8qxww" namespace
STEP: Deleting cluster quick-start-g8qxww/quick-start-7wplys
STEP: Deleting cluster quick-start-7wplys
INFO: Waiting for the Cluster quick-start-g8qxww/quick-start-7wplys to be deleted
STEP: Waiting for cluster quick-start-7wplys to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-ll8jg, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nscc4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-dld9q, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7fh9t, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-quick-start-7wplys-control-plane-rdxt2, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-quick-start-7wplys-control-plane-rdxt2, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7pmx5, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fbrhk, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-quick-start-7wplys-control-plane-rdxt2, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-k4jzf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-quick-start-7wplys-control-plane-rdxt2, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "quick-start" test spec
INFO: Deleting namespace quick-start-g8qxww
STEP: Redacting sensitive information from logs


• [SLOW TEST:727.707 seconds]
... skipping 52 lines ...
STEP: Dumping logs from the "kcp-upgrade-ozk5bk" workload cluster
STEP: Dumping workload cluster kcp-upgrade-exgkgm/kcp-upgrade-ozk5bk logs
Oct 14 05:34:37.577: INFO: INFO: Collecting logs for node kcp-upgrade-ozk5bk-control-plane-txt8z in cluster kcp-upgrade-ozk5bk in namespace kcp-upgrade-exgkgm

Oct 14 05:36:46.908: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-ozk5bk-control-plane-txt8z

Failed to get logs for machine kcp-upgrade-ozk5bk-control-plane-t85xj, cluster kcp-upgrade-exgkgm/kcp-upgrade-ozk5bk: dialing public load balancer at kcp-upgrade-ozk5bk-c784fb7f.uksouth.cloudapp.azure.com: dial tcp 20.108.137.100:22: connect: connection timed out
Oct 14 05:36:48.189: INFO: INFO: Collecting logs for node kcp-upgrade-ozk5bk-md-0-s9c6d in cluster kcp-upgrade-ozk5bk in namespace kcp-upgrade-exgkgm

Oct 14 05:38:57.980: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-ozk5bk-md-0-s9c6d

Failed to get logs for machine kcp-upgrade-ozk5bk-md-0-bcf855b6c-twhdl, cluster kcp-upgrade-exgkgm/kcp-upgrade-ozk5bk: dialing public load balancer at kcp-upgrade-ozk5bk-c784fb7f.uksouth.cloudapp.azure.com: dial tcp 20.108.137.100:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-exgkgm/kcp-upgrade-ozk5bk kube-system pod logs
STEP: Fetching kube-system pod logs took 952.806499ms
STEP: Dumping workload cluster kcp-upgrade-exgkgm/kcp-upgrade-ozk5bk Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-dshwq, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-798bj, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-ozk5bk-control-plane-txt8z, container kube-controller-manager
... skipping 8 lines ...
STEP: Fetching activity logs took 1.208314057s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-exgkgm" namespace
STEP: Deleting cluster kcp-upgrade-exgkgm/kcp-upgrade-ozk5bk
STEP: Deleting cluster kcp-upgrade-ozk5bk
INFO: Waiting for the Cluster kcp-upgrade-exgkgm/kcp-upgrade-ozk5bk to be deleted
STEP: Waiting for cluster kcp-upgrade-ozk5bk to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-vg9g4, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-dshwq, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-798bj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-d8k74, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-exgkgm
STEP: Redacting sensitive information from logs


• [SLOW TEST:1408.225 seconds]
... skipping 92 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-c1w0e9-control-plane-6tgvs, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-c1w0e9-control-plane-wfkqg, container kube-controller-manager
STEP: Dumping workload cluster kcp-upgrade-sbydli/kcp-upgrade-c1w0e9 Azure activity log
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-c1w0e9-control-plane-wfkqg, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-c1w0e9-control-plane-6tgvs, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-rv2fn, container coredns
STEP: Got error while iterating over activity logs for resource group capz-e2e-er8wsy: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001074497s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-sbydli" namespace
STEP: Deleting cluster kcp-upgrade-sbydli/kcp-upgrade-c1w0e9
STEP: Deleting cluster kcp-upgrade-c1w0e9
INFO: Waiting for the Cluster kcp-upgrade-sbydli/kcp-upgrade-c1w0e9 to be deleted
STEP: Waiting for cluster kcp-upgrade-c1w0e9 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-c1w0e9-control-plane-th6sq, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2l6cr, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lgq84, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9hcsj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-c1w0e9-control-plane-wfkqg, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-c1w0e9-control-plane-th6sq, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-rv2fn, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-c1w0e9-control-plane-wfkqg, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-c1w0e9-control-plane-wfkqg, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qlz4n, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-np6kl, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-c1w0e9-control-plane-6tgvs, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-wrdxq, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-c1w0e9-control-plane-6tgvs, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-95tvz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9q4lt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-c1w0e9-control-plane-th6sq, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-c1w0e9-control-plane-th6sq, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-c1w0e9-control-plane-wfkqg, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-crwr8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-c1w0e9-control-plane-6tgvs, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-54777, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-c1w0e9-control-plane-6tgvs, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-sbydli
STEP: Redacting sensitive information from logs


• [SLOW TEST:2161.578 seconds]
... skipping 53 lines ...
Oct 14 05:52:38.494: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-33efqf-control-plane-xwxg9

Oct 14 05:52:40.418: INFO: INFO: Collecting logs for node md-rollout-33efqf-md-0-kqhgv in cluster md-rollout-33efqf in namespace md-rollout-56hos0

Oct 14 05:54:52.326: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-33efqf-md-0-kqhgv

Failed to get logs for machine md-rollout-33efqf-md-0-6d796c7df9-n5xn9, cluster md-rollout-56hos0/md-rollout-33efqf: [dialing from control plane to target node at md-rollout-33efqf-md-0-kqhgv: ssh: rejected: connect failed (Connection timed out), failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollout-33efqf-md-0-kqhgv' under resource group 'capz-e2e-to1qsz' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"]
Oct 14 05:54:53.884: INFO: INFO: Collecting logs for node md-rollout-33efqf-md-0-74619c-pvhsr in cluster md-rollout-33efqf in namespace md-rollout-56hos0

Oct 14 05:55:04.565: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-33efqf-md-0-74619c-pvhsr

STEP: Dumping workload cluster md-rollout-56hos0/md-rollout-33efqf kube-system pod logs
STEP: Fetching kube-system pod logs took 997.449984ms
... skipping 12 lines ...
STEP: Fetching activity logs took 1.085054004s
STEP: Dumping all the Cluster API resources in the "md-rollout-56hos0" namespace
STEP: Deleting cluster md-rollout-56hos0/md-rollout-33efqf
STEP: Deleting cluster md-rollout-33efqf
INFO: Waiting for the Cluster md-rollout-56hos0/md-rollout-33efqf to be deleted
STEP: Waiting for cluster md-rollout-33efqf to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-f4cvf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-rollout-33efqf-control-plane-xwxg9, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-crlxh, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-7mcrh, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-9ghld, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4l2bw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-rollout-33efqf-control-plane-xwxg9, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-rollout-33efqf-control-plane-xwxg9, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-rollout-33efqf-control-plane-xwxg9, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-rollout" test spec
INFO: Deleting namespace md-rollout-56hos0
STEP: Redacting sensitive information from logs


• [SLOW TEST:986.653 seconds]
... skipping 91 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-acn8q6-control-plane-qn7pw, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-acn8q6-control-plane-7s59q, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-acn8q6-control-plane-svf7p, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-6pdjf, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-bhw6v, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-acn8q6-control-plane-svf7p, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-s9re2u: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000691397s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-fpemab" namespace
STEP: Deleting cluster kcp-upgrade-fpemab/kcp-upgrade-acn8q6
STEP: Deleting cluster kcp-upgrade-acn8q6
INFO: Waiting for the Cluster kcp-upgrade-fpemab/kcp-upgrade-acn8q6 to be deleted
STEP: Waiting for cluster kcp-upgrade-acn8q6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-acn8q6-control-plane-svf7p, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-dg6vf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-vq88d, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-acn8q6-control-plane-svf7p, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8mgzs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-acn8q6-control-plane-svf7p, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-acn8q6-control-plane-svf7p, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-fpemab
STEP: Redacting sensitive information from logs


• [SLOW TEST:1999.868 seconds]
... skipping 7 lines ...
Running the Cluster API E2E tests Running the self-hosted spec 
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:103

STEP: Creating namespace "self-hosted" for hosting the cluster
Oct 14 05:58:24.179: INFO: starting to create namespace for hosting the "self-hosted" test spec
2021/10/14 05:58:24 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-8r0gm2" using the "(default)" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-8r0gm2 --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
... skipping 74 lines ...
STEP: Fetching activity logs took 517.78153ms
STEP: Dumping all the Cluster API resources in the "self-hosted" namespace
STEP: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-8r0gm2
INFO: Waiting for the Cluster self-hosted/self-hosted-8r0gm2 to be deleted
STEP: Waiting for cluster self-hosted-8r0gm2 to be deleted
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-lxjj6, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-self-hosted-8r0gm2-control-plane-pvswl, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-self-hosted-8r0gm2-control-plane-pvswl, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-h2qbh, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ndxzz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rmhb2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-kkfvs, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-t5t46, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-gpvz5, container kube-proxy: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-s5zwl, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-6f769dcb5f-skvbn, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-self-hosted-8r0gm2-control-plane-pvswl, container kube-apiserver: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-xkgd5, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-cdr6g, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-self-hosted-8r0gm2-control-plane-pvswl, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
STEP: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
STEP: Redacting sensitive information from logs

... skipping 241 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-6q7lk, container calico-node
STEP: Dumping workload cluster kcp-adoption-0tsnqp/kcp-adoption-ujy1c8 Azure activity log
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-adoption-ujy1c8-control-plane-0, container kube-scheduler
STEP: Creating log watcher for controller kube-system/etcd-kcp-adoption-ujy1c8-control-plane-0, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-adoption-ujy1c8-control-plane-0, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-adoption-ujy1c8-control-plane-0, container kube-controller-manager
STEP: Error starting logs stream for pod kube-system/coredns-78fcd69978-qhb5p, container coredns: container "coredns" in pod "coredns-78fcd69978-qhb5p" is waiting to start: ContainerCreating
STEP: Error starting logs stream for pod kube-system/calico-kube-controllers-846b5f484d-cdnvm, container calico-kube-controllers: container "calico-kube-controllers" in pod "calico-kube-controllers-846b5f484d-cdnvm" is waiting to start: ContainerCreating
STEP: Fetching activity logs took 631.805517ms
STEP: Dumping all the Cluster API resources in the "kcp-adoption-0tsnqp" namespace
STEP: Deleting cluster kcp-adoption-0tsnqp/kcp-adoption-ujy1c8
STEP: Deleting cluster kcp-adoption-ujy1c8
INFO: Waiting for the Cluster kcp-adoption-0tsnqp/kcp-adoption-ujy1c8 to be deleted
STEP: Waiting for cluster kcp-adoption-ujy1c8 to be deleted
... skipping 76 lines ...
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-4qkhz, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-zgrw5, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-7kp4z, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-8px96, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-lt8h8, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-6l8qjh-control-plane-qcvkv, container kube-scheduler
STEP: Error starting logs stream for pod kube-system/calico-node-npnxr, container calico-node: pods "machine-pool-6l8qjh-mp-0000001" not found
STEP: Error starting logs stream for pod kube-system/calico-node-lt8h8, container calico-node: pods "machine-pool-6l8qjh-mp-0000000" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-zgrw5, container kube-proxy: pods "machine-pool-6l8qjh-mp-0000001" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-7kp4z, container kube-proxy: pods "machine-pool-6l8qjh-mp-0000000" not found
STEP: Fetching activity logs took 572.962031ms
STEP: Dumping all the Cluster API resources in the "machine-pool-88a00z" namespace
STEP: Deleting cluster machine-pool-88a00z/machine-pool-6l8qjh
STEP: Deleting cluster machine-pool-6l8qjh
INFO: Waiting for the Cluster machine-pool-88a00z/machine-pool-6l8qjh to be deleted
STEP: Waiting for cluster machine-pool-6l8qjh to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-4qkhz, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-6l8qjh-control-plane-qcvkv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-5c48w, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-zcv2j, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2lsst, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-6l8qjh-control-plane-qcvkv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-6l8qjh-control-plane-qcvkv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8px96, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-6l8qjh-control-plane-qcvkv, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-88a00z
STEP: Redacting sensitive information from logs


• [SLOW TEST:1073.450 seconds]
... skipping 73 lines ...
STEP: Fetching activity logs took 570.300759ms
STEP: Dumping all the Cluster API resources in the "md-scale-2gn998" namespace
STEP: Deleting cluster md-scale-2gn998/md-scale-21zqgo
STEP: Deleting cluster md-scale-21zqgo
INFO: Waiting for the Cluster md-scale-2gn998/md-scale-21zqgo to be deleted
STEP: Waiting for cluster md-scale-21zqgo to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-8d7lf, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-scale-21zqgo-control-plane-rjvx8, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-s2jhm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nfmwv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-scale-21zqgo-control-plane-rjvx8, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-scale-21zqgo-control-plane-rjvx8, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-jxml6, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mv2nr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-6qcft, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-94x2v, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-scale-21zqgo-control-plane-rjvx8, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-scale" test spec
INFO: Deleting namespace md-scale-2gn998
STEP: Redacting sensitive information from logs


• [SLOW TEST:1090.487 seconds]
... skipping 222 lines ...
STEP: Creating log watcher for controller kube-system/etcd-clusterctl-upgrade-vxd6ep-control-plane-cjgcn, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-clusterctl-upgrade-vxd6ep-control-plane-cjgcn, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-clusterctl-upgrade-vxd6ep-control-plane-cjgcn, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-lhk6d, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-kxxqh, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-clusterctl-upgrade-vxd6ep-control-plane-cjgcn, container kube-scheduler
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 221.654791ms
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-0dl725" namespace
STEP: Deleting cluster clusterctl-upgrade-0dl725/clusterctl-upgrade-vxd6ep
STEP: Deleting cluster clusterctl-upgrade-vxd6ep
INFO: Waiting for the Cluster clusterctl-upgrade-0dl725/clusterctl-upgrade-vxd6ep to be deleted
STEP: Waiting for cluster clusterctl-upgrade-vxd6ep to be deleted
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-4fm69, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-fwks8, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-t96j4, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-6f769dcb5f-cgx5l, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-cfl78, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lhk6d, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
INFO: Deleting namespace clusterctl-upgrade-0dl725
STEP: Redacting sensitive information from logs


• [SLOW TEST:2009.435 seconds]
... skipping 7 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck [It] Should successfully trigger KCP remediation 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/controlplane_helpers.go:108

Ran 13 of 23 Specs in 6600.172 seconds
FAIL! -- 12 Passed | 1 Failed | 0 Pending | 10 Skipped


Ginkgo ran 1 suite in 1h51m29.916774598s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...