PR: jsturtevant: Fix Capi e2e tests by upgrading from 1.22
Result: FAILURE
Tests: 1 failed / 11 succeeded
Started: 2021-10-26 02:45
Elapsed: 2h31m
Revision: ddda1124dc9f4ce6b89fc8e80dd27a9b603b08a4
Refs: 1792

Test Failures


capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster 31m6s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sself\-hosted\sspec\sShould\spivot\sthe\sbootstrap\scluster\sto\sa\sself\-hosted\scluster$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:103
failed to run clusterctl init
Unexpected error:
    <*errors.errorString | 0xc00032b3e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/clusterctl/client.go:85
				
stdout/stderr from junit.e2e_suite.3.xml



11 Passed Tests

10 Skipped Tests

Error lines from build-log.txt

... skipping 475 lines ...
Oct 26 03:01:08.444: INFO: INFO: Collecting boot logs for AzureMachine quick-start-g17rmk-md-0-lq2gx

Oct 26 03:01:08.739: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster quick-start-g17rmk in namespace quick-start-cd9zi6

Oct 26 03:01:39.750: INFO: INFO: Collecting boot logs for AzureMachine quick-start-g17rmk-md-win-zfslc

Failed to get logs for machine quick-start-g17rmk-md-win-b9f89766-9x77h, cluster quick-start-cd9zi6/quick-start-g17rmk: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Oct 26 03:01:40.178: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster quick-start-g17rmk in namespace quick-start-cd9zi6

Oct 26 03:02:11.507: INFO: INFO: Collecting boot logs for AzureMachine quick-start-g17rmk-md-win-4dqhg

Failed to get logs for machine quick-start-g17rmk-md-win-b9f89766-dmk5q, cluster quick-start-cd9zi6/quick-start-g17rmk: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster quick-start-cd9zi6/quick-start-g17rmk kube-system pod logs
STEP: Fetching kube-system pod logs took 321.804779ms
STEP: Dumping workload cluster quick-start-cd9zi6/quick-start-g17rmk Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-windows-vnbkq, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-cr7cs, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-v9q8v, container coredns
... skipping 14 lines ...
STEP: Fetching activity logs took 796.732381ms
STEP: Dumping all the Cluster API resources in the "quick-start-cd9zi6" namespace
STEP: Deleting cluster quick-start-cd9zi6/quick-start-g17rmk
STEP: Deleting cluster quick-start-g17rmk
INFO: Waiting for the Cluster quick-start-cd9zi6/quick-start-g17rmk to be deleted
STEP: Waiting for cluster quick-start-g17rmk to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-lkvtb, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-lkvtb, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-vnbkq, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-quick-start-g17rmk-control-plane-kxp7f, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-quick-start-g17rmk-control-plane-kxp7f, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-jjngb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-zfzhb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-brcmr, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-quick-start-g17rmk-control-plane-kxp7f, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-v9q8v, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-quick-start-g17rmk-control-plane-kxp7f, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5fp9h, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-vnbkq, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-cr7cs, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-plgnm, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "quick-start" test spec
INFO: Deleting namespace quick-start-cd9zi6
STEP: Redacting sensitive information from logs


• [SLOW TEST:897.351 seconds]
... skipping 56 lines ...
STEP: Dumping logs from the "kcp-upgrade-3acuvs" workload cluster
STEP: Dumping workload cluster kcp-upgrade-5no93t/kcp-upgrade-3acuvs logs
Oct 26 03:08:31.472: INFO: INFO: Collecting logs for node kcp-upgrade-3acuvs-control-plane-4c6rp in cluster kcp-upgrade-3acuvs in namespace kcp-upgrade-5no93t

Oct 26 03:10:42.411: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-3acuvs-control-plane-4c6rp

Failed to get logs for machine kcp-upgrade-3acuvs-control-plane-hvtrw, cluster kcp-upgrade-5no93t/kcp-upgrade-3acuvs: dialing public load balancer at kcp-upgrade-3acuvs-b230681e.eastus.cloudapp.azure.com: dial tcp 52.191.233.8:22: connect: connection timed out
Oct 26 03:10:43.257: INFO: INFO: Collecting logs for node kcp-upgrade-3acuvs-md-0-jvr5f in cluster kcp-upgrade-3acuvs in namespace kcp-upgrade-5no93t

Oct 26 03:12:53.479: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-3acuvs-md-0-jvr5f

Failed to get logs for machine kcp-upgrade-3acuvs-md-0-596ff656d-2fpb5, cluster kcp-upgrade-5no93t/kcp-upgrade-3acuvs: dialing public load balancer at kcp-upgrade-3acuvs-b230681e.eastus.cloudapp.azure.com: dial tcp 52.191.233.8:22: connect: connection timed out
Oct 26 03:12:54.312: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster kcp-upgrade-3acuvs in namespace kcp-upgrade-5no93t

Oct 26 03:19:26.699: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-3acuvs-md-win-vv9l8

Failed to get logs for machine kcp-upgrade-3acuvs-md-win-6865f55cdc-ffzn2, cluster kcp-upgrade-5no93t/kcp-upgrade-3acuvs: dialing public load balancer at kcp-upgrade-3acuvs-b230681e.eastus.cloudapp.azure.com: dial tcp 52.191.233.8:22: connect: connection timed out
Oct 26 03:19:27.541: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster kcp-upgrade-3acuvs in namespace kcp-upgrade-5no93t

Oct 26 03:25:59.912: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-3acuvs-md-win-brpwb

Failed to get logs for machine kcp-upgrade-3acuvs-md-win-6865f55cdc-mgtt2, cluster kcp-upgrade-5no93t/kcp-upgrade-3acuvs: dialing public load balancer at kcp-upgrade-3acuvs-b230681e.eastus.cloudapp.azure.com: dial tcp 52.191.233.8:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-5no93t/kcp-upgrade-3acuvs kube-system pod logs
STEP: Fetching kube-system pod logs took 353.239288ms
STEP: Creating log watcher for controller kube-system/calico-node-windows-ggmxb, container calico-node-startup
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-3acuvs-control-plane-4c6rp, container etcd
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-3acuvs-control-plane-4c6rp, container kube-scheduler
STEP: Dumping workload cluster kcp-upgrade-5no93t/kcp-upgrade-3acuvs Azure activity log
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-fp4rg, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-xxggv, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-7fqfp, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-ggmxb, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-3acuvs-control-plane-4c6rp, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-3acuvs-control-plane-4c6rp, container kube-controller-manager
STEP: Got error while iterating over activity logs for resource group capz-e2e-a2he3u: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000748753s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-5no93t" namespace
STEP: Deleting cluster kcp-upgrade-5no93t/kcp-upgrade-3acuvs
STEP: Deleting cluster kcp-upgrade-3acuvs
INFO: Waiting for the Cluster kcp-upgrade-5no93t/kcp-upgrade-3acuvs to be deleted
STEP: Waiting for cluster kcp-upgrade-3acuvs to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-ggmxb, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-7fqfp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-lk5hf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-3acuvs-control-plane-4c6rp, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-3acuvs-control-plane-4c6rp, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-pztwh, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-3acuvs-control-plane-4c6rp, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-7krqr, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-vn8rf, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-wnjv2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-xxggv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6jtck, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-3acuvs-control-plane-4c6rp, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-fp4rg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-q522j, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-ggmxb, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-vn8rf, container calico-node-startup: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-5no93t
STEP: Redacting sensitive information from logs


• [SLOW TEST:2369.669 seconds]
... skipping 74 lines ...
Oct 26 03:26:25.445: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-ohhig5-md-0-9df2c

Oct 26 03:26:25.746: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-ohhig5 in namespace kcp-upgrade-vg0apb

Oct 26 03:26:59.777: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-ohhig5-md-win-558z8

Failed to get logs for machine kcp-upgrade-ohhig5-md-win-65588688b8-8crdj, cluster kcp-upgrade-vg0apb/kcp-upgrade-ohhig5: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Oct 26 03:27:00.069: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster kcp-upgrade-ohhig5 in namespace kcp-upgrade-vg0apb

Oct 26 03:27:34.132: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-ohhig5-md-win-szspv

Failed to get logs for machine kcp-upgrade-ohhig5-md-win-65588688b8-cwt47, cluster kcp-upgrade-vg0apb/kcp-upgrade-ohhig5: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster kcp-upgrade-vg0apb/kcp-upgrade-ohhig5 kube-system pod logs
STEP: Fetching kube-system pod logs took 310.592224ms
STEP: Dumping workload cluster kcp-upgrade-vg0apb/kcp-upgrade-ohhig5 Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-ohhig5-control-plane-xqccq, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-ohhig5-control-plane-6kssc, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-r5dlw, container kube-proxy
... skipping 20 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-4t65c, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-t7jkp, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-ohhig5-control-plane-6kssc, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-tbxnt, container coredns
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-ohhig5-control-plane-hgfdx, container etcd
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-ohhig5-control-plane-6kssc, container kube-controller-manager
STEP: Got error while iterating over activity logs for resource group capz-e2e-7oxg7g: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00109565s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-vg0apb" namespace
STEP: Deleting cluster kcp-upgrade-vg0apb/kcp-upgrade-ohhig5
STEP: Deleting cluster kcp-upgrade-ohhig5
INFO: Waiting for the Cluster kcp-upgrade-vg0apb/kcp-upgrade-ohhig5 to be deleted
STEP: Waiting for cluster kcp-upgrade-ohhig5 to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-ohhig5-control-plane-6kssc, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-tbxnt, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-ohhig5-control-plane-6kssc, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-8bq4k, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-ohhig5-control-plane-xqccq, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-ohhig5-control-plane-6kssc, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-glk4d, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-t7xvd, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-t7xvd, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mmpxn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-plbn6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-ohhig5-control-plane-hgfdx, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4t65c, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-9xlhx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-ohhig5-control-plane-xqccq, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-r5dlw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-ohhig5-control-plane-xqccq, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-77tks, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-8bq4k, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-ohhig5-control-plane-6kssc, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-ohhig5-control-plane-hgfdx, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-ohhig5-control-plane-hgfdx, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-sltgf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-n6zdm, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-vns4p, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xzkr5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-t7jkp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-ohhig5-control-plane-hgfdx, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-ohhig5-control-plane-xqccq, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-vg0apb
STEP: Redacting sensitive information from logs


• [SLOW TEST:2590.492 seconds]
... skipping 91 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-sbj88, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-hfj9k, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-byktvy-control-plane-mgcfr, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-byktvy-control-plane-4fnvl, container kube-scheduler
STEP: Fetching kube-system pod logs took 300.103909ms
STEP: Dumping workload cluster kcp-upgrade-pctlcz/kcp-upgrade-byktvy Azure activity log
STEP: Got error while iterating over activity logs for resource group capz-e2e-j8iihj: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000465783s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-pctlcz" namespace
STEP: Deleting cluster kcp-upgrade-pctlcz/kcp-upgrade-byktvy
STEP: Deleting cluster kcp-upgrade-byktvy
INFO: Waiting for the Cluster kcp-upgrade-pctlcz/kcp-upgrade-byktvy to be deleted
STEP: Waiting for cluster kcp-upgrade-byktvy to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-byktvy-control-plane-mgcfr, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-4msw8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wlwrf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5x8jx, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-byktvy-control-plane-4fnvl, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-byktvy-control-plane-mgcfr, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-byktvy-control-plane-4fnvl, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-nzm6j, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-lwvql, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8pmkg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-byktvy-control-plane-mgcfr, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-byktvy-control-plane-4fnvl, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hfj9k, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-byktvy-control-plane-mgcfr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-byktvy-control-plane-4fnvl, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-pctlcz
STEP: Redacting sensitive information from logs


• [SLOW TEST:2029.848 seconds]
... skipping 74 lines ...
STEP: Fetching activity logs took 635.408391ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-57e17d" namespace
STEP: Deleting cluster mhc-remediation-57e17d/mhc-remediation-428wwe
STEP: Deleting cluster mhc-remediation-428wwe
INFO: Waiting for the Cluster mhc-remediation-57e17d/mhc-remediation-428wwe to be deleted
STEP: Waiting for cluster mhc-remediation-428wwe to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-428wwe-control-plane-wrs9p, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-428wwe-control-plane-wrs9p, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cz459, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-dxm2g, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-428wwe-control-plane-wrs9p, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-ng9wd, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5p5g2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-bvrdl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-428wwe-control-plane-wrs9p, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-57e17d
STEP: Redacting sensitive information from logs


• [SLOW TEST:1152.044 seconds]
... skipping 7 lines ...
Running the Cluster API E2E tests Running the self-hosted spec 
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:103

STEP: Creating namespace "self-hosted" for hosting the cluster
Oct 26 03:36:03.429: INFO: starting to create namespace for hosting the "self-hosted" test spec
2021/10/26 03:36:03 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-g5jmzy" using the "(default)" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-g5jmzy --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
... skipping 42 lines ...
Oct 26 03:53:58.440: INFO: INFO: Collecting boot logs for AzureMachine self-hosted-g5jmzy-md-0-wqq9d

Oct 26 03:53:58.773: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster self-hosted-g5jmzy in namespace self-hosted

Oct 26 03:54:43.129: INFO: INFO: Collecting boot logs for AzureMachine self-hosted-g5jmzy-md-win-vxcbt

Failed to get logs for machine self-hosted-g5jmzy-md-win-7c9994c6f9-8gtbn, cluster self-hosted/self-hosted-g5jmzy: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Oct 26 03:54:43.473: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster self-hosted-g5jmzy in namespace self-hosted

Oct 26 03:55:14.152: INFO: INFO: Collecting boot logs for AzureMachine self-hosted-g5jmzy-md-win-5qs9v

Failed to get logs for machine self-hosted-g5jmzy-md-win-7c9994c6f9-bg9zs, cluster self-hosted/self-hosted-g5jmzy: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster self-hosted/self-hosted-g5jmzy kube-system pod logs
STEP: Fetching kube-system pod logs took 326.28986ms
STEP: Creating log watcher for controller kube-system/calico-node-windows-g8pwf, container calico-node-startup
STEP: Creating log watcher for controller kube-system/etcd-self-hosted-g5jmzy-control-plane-j5gvn, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-windows-9vzxk, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-windows-g8pwf, container calico-node-felix
... skipping 14 lines ...
STEP: Fetching activity logs took 1.485012545s
STEP: Dumping all the Cluster API resources in the "self-hosted" namespace
STEP: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-g5jmzy
INFO: Waiting for the Cluster self-hosted/self-hosted-g5jmzy to be deleted
STEP: Waiting for cluster self-hosted-g5jmzy to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-9vzxk, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-z2vgk, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-4hbbq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-self-hosted-g5jmzy-control-plane-j5gvn, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-self-hosted-g5jmzy-control-plane-j5gvn, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-j9mq6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7kdzd, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ssgsk, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-fww49, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-shs4n, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-9vzxk, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-self-hosted-g5jmzy-control-plane-j5gvn, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-g8pwf, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rxt2g, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nj2wx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-self-hosted-g5jmzy-control-plane-j5gvn, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-g8pwf, container calico-node-felix: http2: client connection lost
STEP: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
STEP: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
STEP: Redacting sensitive information from logs

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:40
  Running the self-hosted spec
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:159
    Should pivot the bootstrap cluster to a self-hosted cluster [It]
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:103

    failed to run clusterctl init
    Unexpected error:
        <*errors.errorString | 0xc00032b3e0>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred

... skipping 97 lines ...
Oct 26 03:46:20.181: INFO: Collecting boot logs for AzureMachine md-rollout-4lwxfr-md-0-7duvba-n2s88

Oct 26 03:46:20.429: INFO: Collecting logs for node 10.1.0.6 in cluster md-rollout-4lwxfr in namespace md-rollout-uvakem

Oct 26 03:48:08.494: INFO: Collecting boot logs for AzureMachine md-rollout-4lwxfr-md-win-wgxws

Failed to get logs for machine md-rollout-4lwxfr-md-win-7d6cb64cd9-8p9z6, cluster md-rollout-uvakem/md-rollout-4lwxfr: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Oct 26 03:48:09.342: INFO: Collecting logs for node 10.1.0.5 in cluster md-rollout-4lwxfr in namespace md-rollout-uvakem

Oct 26 03:48:52.671: INFO: Collecting boot logs for AzureMachine md-rollout-4lwxfr-md-win-b6rbl

Failed to get logs for machine md-rollout-4lwxfr-md-win-7d6cb64cd9-ptjfw, cluster md-rollout-uvakem/md-rollout-4lwxfr: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Oct 26 03:48:53.007: INFO: Collecting logs for node 10.1.0.8 in cluster md-rollout-4lwxfr in namespace md-rollout-uvakem

Oct 26 03:49:19.215: INFO: Collecting boot logs for AzureMachine md-rollout-4lwxfr-md-win-3wmqaw-82twk

Failed to get logs for machine md-rollout-4lwxfr-md-win-7d7b5bdcb-rkfxb, cluster md-rollout-uvakem/md-rollout-4lwxfr: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-rollout-uvakem/md-rollout-4lwxfr kube-system pod logs
STEP: Fetching kube-system pod logs took 319.911137ms
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-z4ldx, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-rollout-4lwxfr-control-plane-5z47x, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-windows-nxggr, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-windows-dvn2c, container calico-node-felix
... skipping 17 lines ...
STEP: Fetching activity logs took 1.143535335s
STEP: Dumping all the Cluster API resources in the "md-rollout-uvakem" namespace
STEP: Deleting cluster md-rollout-uvakem/md-rollout-4lwxfr
STEP: Deleting cluster md-rollout-4lwxfr
INFO: Waiting for the Cluster md-rollout-uvakem/md-rollout-4lwxfr to be deleted
STEP: Waiting for cluster md-rollout-4lwxfr to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-rollout-4lwxfr-control-plane-5z47x, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-jh27s, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-rollout-4lwxfr-control-plane-5z47x, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-546rf, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xrz6s, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cj2sl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5hlfr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8p8xh, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-546rf, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-rollout-4lwxfr-control-plane-5z47x, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-rollout-4lwxfr-control-plane-5z47x, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-z4ldx, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-rollout" test spec
INFO: Deleting namespace md-rollout-uvakem
STEP: Redacting sensitive information from logs


• [SLOW TEST:2619.826 seconds]
... skipping 58 lines ...
STEP: Fetching activity logs took 563.158958ms
STEP: Dumping all the Cluster API resources in the "kcp-adoption-8o3bqp" namespace
STEP: Deleting cluster kcp-adoption-8o3bqp/kcp-adoption-rugzpj
STEP: Deleting cluster kcp-adoption-rugzpj
INFO: Waiting for the Cluster kcp-adoption-8o3bqp/kcp-adoption-rugzpj to be deleted
STEP: Waiting for cluster kcp-adoption-rugzpj to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-adoption-rugzpj-control-plane-0, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-vhv4k, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-adoption-rugzpj-control-plane-0, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bv4m9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-adoption-rugzpj-control-plane-0, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mght5, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4hghk, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-adoption-rugzpj-control-plane-0, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-wtdw9, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-adoption" test spec
INFO: Deleting namespace kcp-adoption-8o3bqp
STEP: Redacting sensitive information from logs


• [SLOW TEST:817.315 seconds]
... skipping 90 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-4bxbn1-control-plane-h5hv8, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-nm7s8, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-wnhkp, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-t42ks, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-4bxbn1-control-plane-hjn4g, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-g6rnd, container coredns
STEP: Got error while iterating over activity logs for resource group capz-e2e-6awotj: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000859577s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-f57f2e" namespace
STEP: Deleting cluster mhc-remediation-f57f2e/mhc-remediation-4bxbn1
STEP: Deleting cluster mhc-remediation-4bxbn1
INFO: Waiting for the Cluster mhc-remediation-f57f2e/mhc-remediation-4bxbn1 to be deleted
STEP: Waiting for cluster mhc-remediation-4bxbn1 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-t42ks, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-4bxbn1-control-plane-xpbjf, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-4bxbn1-control-plane-h5hv8, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-4bxbn1-control-plane-hjn4g, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-4bxbn1-control-plane-hjn4g, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-4bxbn1-control-plane-hjn4g, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-4bxbn1-control-plane-h5hv8, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-q6cnj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-wnhkp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-nm7s8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-w9n24, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-4bxbn1-control-plane-xpbjf, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-4bxbn1-control-plane-xpbjf, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-4bxbn1-control-plane-hjn4g, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-sdzkn, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-npv8f, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8fhbw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-4bxbn1-control-plane-h5hv8, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-4bxbn1-control-plane-xpbjf, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-g6rnd, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-4bxbn1-control-plane-h5hv8, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-f57f2e
STEP: Redacting sensitive information from logs


• [SLOW TEST:2740.627 seconds]
... skipping 61 lines ...
Oct 26 04:38:32.962: INFO: Collecting boot logs for AzureMachine md-scale-xmifmh-md-0-kbwfk

Oct 26 04:38:33.404: INFO: Collecting logs for node 10.1.0.5 in cluster md-scale-xmifmh in namespace md-scale-kwqkbl

Oct 26 04:39:57.871: INFO: Collecting boot logs for AzureMachine md-scale-xmifmh-md-win-tgvcl

Failed to get logs for machine md-scale-xmifmh-md-win-674c88bb94-58klz, cluster md-scale-kwqkbl/md-scale-xmifmh: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Oct 26 04:39:58.171: INFO: Collecting logs for node 10.1.0.4 in cluster md-scale-xmifmh in namespace md-scale-kwqkbl

Oct 26 04:40:39.809: INFO: Collecting boot logs for AzureMachine md-scale-xmifmh-md-win-6kwfp

Failed to get logs for machine md-scale-xmifmh-md-win-674c88bb94-f9q2w, cluster md-scale-kwqkbl/md-scale-xmifmh: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-scale-kwqkbl/md-scale-xmifmh kube-system pod logs
STEP: Fetching kube-system pod logs took 348.256132ms
STEP: Dumping workload cluster md-scale-kwqkbl/md-scale-xmifmh Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-n5j7x, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-bmpwc, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-rwdtc, container calico-node
... skipping 14 lines ...
STEP: Fetching activity logs took 1.473402719s
STEP: Dumping all the Cluster API resources in the "md-scale-kwqkbl" namespace
STEP: Deleting cluster md-scale-kwqkbl/md-scale-xmifmh
STEP: Deleting cluster md-scale-xmifmh
INFO: Waiting for the Cluster md-scale-kwqkbl/md-scale-xmifmh to be deleted
STEP: Waiting for cluster md-scale-xmifmh to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-rvgzr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-f2klt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-gpwc5, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-zbp9f, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-t45z5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mwqrc, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-scale-xmifmh-control-plane-9w4lz, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-scale-xmifmh-control-plane-9w4lz, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-n5j7x, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-zbp9f, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-scale-xmifmh-control-plane-9w4lz, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-zdk9z, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-bmpwc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rwdtc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-scale-xmifmh-control-plane-9w4lz, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-gpwc5, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-zqqrh, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-scale" test spec
INFO: Deleting namespace md-scale-kwqkbl
STEP: Redacting sensitive information from logs


• [SLOW TEST:1594.084 seconds]
... skipping 60 lines ...
Oct 26 04:43:28.707: INFO: Collecting boot logs for AzureMachine machine-pool-n23std-control-plane-dtsk6

Oct 26 04:43:29.491: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-n23std in namespace machine-pool-6ov7fv

Oct 26 04:43:45.559: INFO: Collecting boot logs for VMSS instance 2 of scale set machine-pool-n23std-mp-0

Failed to get logs for machine pool machine-pool-n23std-mp-0, cluster machine-pool-6ov7fv/machine-pool-n23std: [running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1]
Oct 26 04:43:45.860: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-n23std in namespace machine-pool-6ov7fv

Oct 26 04:44:18.899: INFO: Collecting boot logs for VMSS instance 2 of scale set win-p-win

Failed to get logs for machine pool machine-pool-n23std-mp-win, cluster machine-pool-6ov7fv/machine-pool-n23std: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster machine-pool-6ov7fv/machine-pool-n23std kube-system pod logs
STEP: Fetching kube-system pod logs took 331.40537ms
STEP: Creating log watcher for controller kube-system/calico-node-windows-xpfhh, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-ncmtl, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-n23std-control-plane-dtsk6, container kube-apiserver
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-g7f27, container coredns
... skipping 9 lines ...
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-n23std-control-plane-dtsk6, container etcd
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-f7f2s, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-r58wc, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-qdgrc, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-nvxbw, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-p8jg6, container calico-node
STEP: Error starting logs stream for pod kube-system/calico-node-jzp9q, container calico-node: pods "machine-pool-n23std-mp-0000001" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-nvxbw, container kube-proxy: pods "machine-pool-n23std-mp-0000000" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-r58wc, container kube-proxy: pods "machine-pool-n23std-mp-0000001" not found
STEP: Error starting logs stream for pod kube-system/calico-node-p8jg6, container calico-node: pods "machine-pool-n23std-mp-0000000" not found
STEP: Fetching activity logs took 543.899652ms
STEP: Dumping all the Cluster API resources in the "machine-pool-6ov7fv" namespace
STEP: Deleting cluster machine-pool-6ov7fv/machine-pool-n23std
STEP: Deleting cluster machine-pool-n23std
INFO: Waiting for the Cluster machine-pool-6ov7fv/machine-pool-n23std to be deleted
STEP: Waiting for cluster machine-pool-n23std to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-n23std-control-plane-dtsk6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-n23std-control-plane-dtsk6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-q668m, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mhw78, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-xpfhh, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-xpfhh, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-g7f27, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2lc6l, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-n23std-control-plane-dtsk6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-67l99, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-f7f2s, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-n23std-control-plane-dtsk6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qdgrc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-ncmtl, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-6ov7fv
STEP: Redacting sensitive information from logs


• [SLOW TEST:2149.880 seconds]
... skipping 56 lines ...
STEP: Dumping logs from the "node-drain-rreq9p" workload cluster
STEP: Dumping workload cluster node-drain-xlt0px/node-drain-rreq9p logs
Oct 26 05:06:58.467: INFO: Collecting logs for node node-drain-rreq9p-control-plane-zrdjs in cluster node-drain-rreq9p in namespace node-drain-xlt0px

Oct 26 05:09:08.968: INFO: Collecting boot logs for AzureMachine node-drain-rreq9p-control-plane-zrdjs

Failed to get logs for machine node-drain-rreq9p-control-plane-tt8rw, cluster node-drain-xlt0px/node-drain-rreq9p: dialing public load balancer at node-drain-rreq9p-ffdaa26a.eastus.cloudapp.azure.com: dial tcp 52.191.35.62:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-xlt0px/node-drain-rreq9p kube-system pod logs
STEP: Fetching kube-system pod logs took 319.821364ms
STEP: Dumping workload cluster node-drain-xlt0px/node-drain-rreq9p Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-5w9fq, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/etcd-node-drain-rreq9p-control-plane-zrdjs, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-rreq9p-control-plane-zrdjs, container kube-apiserver
... skipping 25 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests Running the self-hosted spec [It] Should pivot the bootstrap cluster to a self-hosted cluster 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/clusterctl/client.go:85

Ran 12 of 22 Specs in 8691.792 seconds
FAIL! -- 11 Passed | 1 Failed | 0 Pending | 10 Skipped


Ginkgo ran 1 suite in 2h26m14.695169312s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
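The two silencing options named in the notice above can be sketched as shell commands (a minimal illustration; whether Ginkgo sees the variable depends on how `make test-e2e` propagates the environment):

```shell
# Option 1: acknowledge the Ginkgo 2.0 release candidate via the
# environment variable the notice mentions, exported so child
# processes (e.g. `make test-e2e`) inherit it.
export ACK_GINKGO_RC=true

# Option 2: create the marker file Ginkgo checks for instead.
touch "$HOME/.ack-ginkgo-rc"

# Show what was set.
echo "ACK_GINKGO_RC=${ACK_GINKGO_RC}"
```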
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...