PR: jsturtevant: Fix Capi e2e tests by upgrading from 1.22
Result: FAILURE
Tests: 1 failed / 11 succeeded
Started: 2021-10-25 23:33
Elapsed: 2h33m
Revision: ddda1124dc9f4ce6b89fc8e80dd27a9b603b08a4
Refs: 1792

Test Failures


capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster 28m8s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sself\-hosted\sspec\sShould\spivot\sthe\sbootstrap\scluster\sto\sa\sself\-hosted\scluster$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:103
failed to run clusterctl init
Unexpected error:
    <*errors.errorString | 0xc00032f430>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/clusterctl/client.go:85
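
For context: the repro line above ("go run hack/e2e.go ...") focuses Ginkgo on the regex-escaped full test name. "timed out waiting for the condition" is the fixed error string of wait.ErrWaitTimeout from k8s.io/apimachinery/pkg/util/wait; clusterctl init surfaces it when something it polls for (for example, cert-manager or the provider controllers becoming Available) never turns healthy in time. A minimal sketch of how that string arises; the interval, timeout, and readiness check below are illustrative, not the framework's actual values:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // PollImmediate re-checks the condition until it returns true or the
        // timeout elapses; on timeout it returns wait.ErrWaitTimeout, whose
        // Error() is exactly "timed out waiting for the condition".
        err := wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
            controllersReady := false // stand-in for "are the installed controllers Available?"
            return controllersReady, nil
        })
        fmt.Println(err) // timed out waiting for the condition
    }
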
				
Full stdout/stderr: junit.e2e_suite.2.xml



11 Passed Tests

10 Skipped Tests

Error lines from build-log.txt

... skipping 473 lines ...
Oct 25 23:50:07.201: INFO: INFO: Collecting boot logs for AzureMachine quick-start-nxr4ad-md-0-6n8rb

Oct 25 23:50:08.067: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster quick-start-nxr4ad in namespace quick-start-jf07dr

Oct 25 23:50:39.726: INFO: INFO: Collecting boot logs for AzureMachine quick-start-nxr4ad-md-win-h488x

Failed to get logs for machine quick-start-nxr4ad-md-win-cc8ff7c4-2h9jg, cluster quick-start-jf07dr/quick-start-nxr4ad: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Oct 25 23:50:40.407: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster quick-start-nxr4ad in namespace quick-start-jf07dr

Oct 25 23:51:20.160: INFO: INFO: Collecting boot logs for AzureMachine quick-start-nxr4ad-md-win-wgbg4

Failed to get logs for machine quick-start-nxr4ad-md-win-cc8ff7c4-g7n59, cluster quick-start-jf07dr/quick-start-nxr4ad: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
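
These two Windows log-collection failures recur throughout the run: both probed commands are Docker-specific ("get-eventlog ... -Source Docker" and "docker ps -a"), so they plausibly exit non-zero on nodes where Docker is not running, and the collector reports each non-zero exit as "Process exited with status 1". A sketch of that wrapping, assuming the commands are run over SSH with golang.org/x/crypto/ssh; runRemote and the exact error format are illustrative, not the framework's actual code:

    package logcollect

    import (
        "errors"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    // runRemote is a hypothetical stand-in for per-command node log
    // collection: run one command over an established SSH client and wrap a
    // non-zero exit the way the failures above are reported.
    func runRemote(client *ssh.Client, cmd string) error {
        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        if err := session.Run(cmd); err != nil {
            var exitErr *ssh.ExitError
            if errors.As(err, &exitErr) {
                return fmt.Errorf("running command %q: Process exited with status %d", cmd, exitErr.ExitStatus())
            }
            return err
        }
        return nil
    }
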
STEP: Dumping workload cluster quick-start-jf07dr/quick-start-nxr4ad kube-system pod logs
STEP: Fetching kube-system pod logs took 987.704047ms
STEP: Dumping workload cluster quick-start-jf07dr/quick-start-nxr4ad Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-windows-gc9n8, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-windows-znttj, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-apiserver-quick-start-nxr4ad-control-plane-ht58c, container kube-apiserver
... skipping 14 lines ...
STEP: Fetching activity logs took 538.13745ms
STEP: Dumping all the Cluster API resources in the "quick-start-jf07dr" namespace
STEP: Deleting cluster quick-start-jf07dr/quick-start-nxr4ad
STEP: Deleting cluster quick-start-nxr4ad
INFO: Waiting for the Cluster quick-start-jf07dr/quick-start-nxr4ad to be deleted
STEP: Waiting for cluster quick-start-nxr4ad to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-chkqs, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vmd4p, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-znttj, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-quick-start-nxr4ad-control-plane-ht58c, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-gc9n8, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-quick-start-nxr4ad-control-plane-ht58c, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-quick-start-nxr4ad-control-plane-ht58c, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-bf5j5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-gc9n8, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wwvzk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-c5wcn, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-fgpt7, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-wt5zd, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-znttj, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-quick-start-nxr4ad-control-plane-ht58c, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-4vn2v, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-76hdd, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "quick-start" test spec
INFO: Deleting namespace quick-start-jf07dr
STEP: Redacting sensitive information from logs
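
The burst of "http2: client connection lost" lines above is expected teardown noise: each log watcher follows a container's logs over an HTTP/2 connection to the workload cluster's apiserver, and those streams break when the cluster's VMs are deleted mid-stream. A minimal client-go sketch of such a watcher (function, pod, and container names are illustrative); note the two distinct failure points, matching the "Error starting logs stream" and "Got error while streaming logs" phrasings in this log:

    package main

    import (
        "context"
        "io"
        "os"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func streamPodLogs(ctx context.Context, cs kubernetes.Interface, ns, pod, container string) error {
        req := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{Container: container, Follow: true})
        stream, err := req.Stream(ctx) // fails -> "Error starting logs stream"
        if err != nil {
            return err
        }
        defer stream.Close()
        _, err = io.Copy(os.Stdout, stream) // breaks mid-stream -> "Got error while streaming logs"
        return err
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        _ = streamPodLogs(context.Background(), cs, "kube-system", "kube-proxy-vv6rk", "kube-proxy")
    }
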


• [SLOW TEST:1462.944 seconds]
... skipping 74 lines ...
Oct 26 00:12:51.206: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-yez1rf-md-0-frcwj

Oct 26 00:12:51.808: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster kcp-upgrade-yez1rf in namespace kcp-upgrade-pm3vhr

Oct 26 00:13:18.705: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-yez1rf-md-win-g4tsv

Failed to get logs for machine kcp-upgrade-yez1rf-md-win-6bdd686879-wlpgk, cluster kcp-upgrade-pm3vhr/kcp-upgrade-yez1rf: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Oct 26 00:13:19.176: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster kcp-upgrade-yez1rf in namespace kcp-upgrade-pm3vhr

Oct 26 00:13:46.478: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-yez1rf-md-win-hj9l9

Failed to get logs for machine kcp-upgrade-yez1rf-md-win-6bdd686879-wx55g, cluster kcp-upgrade-pm3vhr/kcp-upgrade-yez1rf: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster kcp-upgrade-pm3vhr/kcp-upgrade-yez1rf kube-system pod logs
STEP: Fetching kube-system pod logs took 864.407664ms
STEP: Dumping workload cluster kcp-upgrade-pm3vhr/kcp-upgrade-yez1rf Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-q7bld, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-tkm5p, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-yez1rf-control-plane-4lw44, container kube-controller-manager
... skipping 20 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-windows-lsq2n, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-7vlwh, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-d8wnn, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-yez1rf-control-plane-hlck6, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-yez1rf-control-plane-wv5x7, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-windows-lsq2n, container calico-node-felix
STEP: Got error while iterating over activity logs for resource group capz-e2e-qtzjk3: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000610843s
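
The ~30.000s fetch time just above is a tell: the activity-log dump appears to run under a 30-second context deadline, so when Azure's paginated listNextResults call stalls, it fails with "context deadline exceeded" at almost exactly the deadline (the same pattern repeats for other resource groups below). A self-contained sketch of that pattern; the 30s constant and fetchNextPage are assumptions standing in for the insights pager:

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // fetchNextPage is a hypothetical stand-in for the Azure
    // insights.ActivityLogsClient pagination call that timed out above.
    func fetchNextPage(ctx context.Context) error {
        select {
        case <-time.After(45 * time.Second): // pretend the service is slow
            return nil
        case <-ctx.Done():
            return ctx.Err() // context deadline exceeded
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()
        err := fetchNextPage(ctx)
        fmt.Println(err, errors.Is(err, context.DeadlineExceeded)) // after ~30s: "context deadline exceeded true"
    }
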
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-pm3vhr" namespace
STEP: Deleting cluster kcp-upgrade-pm3vhr/kcp-upgrade-yez1rf
STEP: Deleting cluster kcp-upgrade-yez1rf
INFO: Waiting for the Cluster kcp-upgrade-pm3vhr/kcp-upgrade-yez1rf to be deleted
STEP: Waiting for cluster kcp-upgrade-yez1rf to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-zwdvz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-yez1rf-control-plane-hlck6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7vlwh, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-yez1rf-control-plane-wv5x7, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-yez1rf-control-plane-wv5x7, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-msgvz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-lsq2n, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5hzg2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-yez1rf-control-plane-hlck6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-yez1rf-control-plane-4lw44, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-5xg52, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-yez1rf-control-plane-4lw44, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-lsq2n, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-yez1rf-control-plane-wv5x7, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-qh6lq, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-yez1rf-control-plane-hlck6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-yez1rf-control-plane-wv5x7, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-yez1rf-control-plane-4lw44, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fljbx, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-swb9m, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-tkm5p, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-q7bld, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-yez1rf-control-plane-hlck6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2mp6t, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-yez1rf-control-plane-4lw44, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-d8wnn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-wdcn5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-45v2x, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-qh6lq, container calico-node-startup: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-pm3vhr
STEP: Redacting sensitive information from logs


• [SLOW TEST:2431.027 seconds]
... skipping 56 lines ...
STEP: Dumping logs from the "kcp-upgrade-8akitf" workload cluster
STEP: Dumping workload cluster kcp-upgrade-0hy0or/kcp-upgrade-8akitf logs
Oct 25 23:59:31.954: INFO: INFO: Collecting logs for node kcp-upgrade-8akitf-control-plane-xpmml in cluster kcp-upgrade-8akitf in namespace kcp-upgrade-0hy0or

Oct 26 00:01:42.913: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-8akitf-control-plane-xpmml

Failed to get logs for machine kcp-upgrade-8akitf-control-plane-nqxzm, cluster kcp-upgrade-0hy0or/kcp-upgrade-8akitf: dialing public load balancer at kcp-upgrade-8akitf-85c9c2e6.northeurope.cloudapp.azure.com: dial tcp 20.82.252.23:22: connect: connection timed out
Oct 26 00:01:44.382: INFO: INFO: Collecting logs for node kcp-upgrade-8akitf-md-0-5twrx in cluster kcp-upgrade-8akitf in namespace kcp-upgrade-0hy0or

Oct 26 00:03:53.989: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-8akitf-md-0-5twrx

Failed to get logs for machine kcp-upgrade-8akitf-md-0-55b8fbc9fd-fghph, cluster kcp-upgrade-0hy0or/kcp-upgrade-8akitf: dialing public load balancer at kcp-upgrade-8akitf-85c9c2e6.northeurope.cloudapp.azure.com: dial tcp 20.82.252.23:22: connect: connection timed out
Oct 26 00:03:55.406: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-8akitf in namespace kcp-upgrade-0hy0or

Oct 26 00:10:27.205: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-8akitf-md-win-zfb4v

Failed to get logs for machine kcp-upgrade-8akitf-md-win-dfc4b5c5c-4z5r4, cluster kcp-upgrade-0hy0or/kcp-upgrade-8akitf: dialing public load balancer at kcp-upgrade-8akitf-85c9c2e6.northeurope.cloudapp.azure.com: dial tcp 20.82.252.23:22: connect: connection timed out
Oct 26 00:10:29.242: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster kcp-upgrade-8akitf in namespace kcp-upgrade-0hy0or

Oct 26 00:17:02.465: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-8akitf-md-win-vj795

Failed to get logs for machine kcp-upgrade-8akitf-md-win-dfc4b5c5c-vfr7z, cluster kcp-upgrade-0hy0or/kcp-upgrade-8akitf: dialing public load balancer at kcp-upgrade-8akitf-85c9c2e6.northeurope.cloudapp.azure.com: dial tcp 20.82.252.23:22: connect: connection timed out
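
All four collection failures for this cluster share one cause: no SSH (TCP/22) connection could be opened through the cluster's public load balancer, so every per-node attempt fails identically. "connect: connection timed out" is the OS-level ETIMEDOUT from an unanswered TCP handshake, for example when the load balancer's port-22 rule has no healthy backend; that diagnosis is an inference from the log, not confirmed here. A tiny sketch of how Go surfaces the error (hostname taken from the log above):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // With nothing answering the SYN, the kernel eventually gives up and
        // Go returns a *net.OpError wrapping "connect: connection timed out".
        conn, err := net.Dial("tcp", "kcp-upgrade-8akitf-85c9c2e6.northeurope.cloudapp.azure.com:22")
        if err != nil {
            fmt.Println(err) // dial tcp 20.82.252.23:22: connect: connection timed out
            return
        }
        conn.Close()
    }
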
STEP: Dumping workload cluster kcp-upgrade-0hy0or/kcp-upgrade-8akitf kube-system pod logs
STEP: Fetching kube-system pod logs took 1.023875564s
STEP: Dumping workload cluster kcp-upgrade-0hy0or/kcp-upgrade-8akitf Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-dnjdm, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-windows-62j8n, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-7rvv6, container calico-kube-controllers
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-8akitf-control-plane-xpmml, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-27pf8, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-df5fk, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-h4k75, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-g86hv, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-8akitf-control-plane-xpmml, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-vyww0s: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001155282s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-0hy0or" namespace
STEP: Deleting cluster kcp-upgrade-0hy0or/kcp-upgrade-8akitf
STEP: Deleting cluster kcp-upgrade-8akitf
INFO: Waiting for the Cluster kcp-upgrade-0hy0or/kcp-upgrade-8akitf to be deleted
STEP: Waiting for cluster kcp-upgrade-8akitf to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-dnjdm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-8akitf-control-plane-xpmml, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-8akitf-control-plane-xpmml, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-27pf8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-p6xh6, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-df5fk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-p6xh6, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-8akitf-control-plane-xpmml, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-7rvv6, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-8akitf-control-plane-xpmml, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-s78gm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-cblzl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fpw5b, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-g86hv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-h4k75, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-62j8n, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-62j8n, container calico-node-startup: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-0hy0or
STEP: Redacting sensitive information from logs


• [SLOW TEST:2556.614 seconds]
... skipping 91 lines ...
STEP: Dumping workload cluster kcp-upgrade-0t1etv/kcp-upgrade-zm1pop Azure activity log
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-zm1pop-control-plane-xfnkk, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-r6x74, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-zm1pop-control-plane-nklgb, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-zm1pop-control-plane-fhnfm, container etcd
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-stzdm, container coredns
STEP: Got error while iterating over activity logs for resource group capz-e2e-6fswim: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000275924s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-0t1etv" namespace
STEP: Deleting cluster kcp-upgrade-0t1etv/kcp-upgrade-zm1pop
STEP: Deleting cluster kcp-upgrade-zm1pop
INFO: Waiting for the Cluster kcp-upgrade-0t1etv/kcp-upgrade-zm1pop to be deleted
STEP: Waiting for cluster kcp-upgrade-zm1pop to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xvj7r, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-zm1pop-control-plane-nklgb, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6z8lf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-stzdm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-r6x74, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-zm1pop-control-plane-nklgb, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-zm1pop-control-plane-nklgb, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-lb8dr, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-zm1pop-control-plane-nklgb, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qws95, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-sstxv, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-0t1etv
STEP: Redacting sensitive information from logs


• [SLOW TEST:2357.562 seconds]
... skipping 7 lines ...
Running the Cluster API E2E tests Running the self-hosted spec 
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:103

STEP: Creating namespace "self-hosted" for hosting the cluster
Oct 26 00:23:23.804: INFO: starting to create namespace for hosting the "self-hosted" test spec
2021/10/26 00:23:23 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-fs1pu9" using the "(default)" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-fs1pu9 --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
... skipping 42 lines ...
Oct 26 00:42:22.775: INFO: INFO: Collecting boot logs for AzureMachine self-hosted-fs1pu9-md-0-xtq75

Oct 26 00:42:23.146: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster self-hosted-fs1pu9 in namespace self-hosted

Oct 26 00:42:59.216: INFO: INFO: Collecting boot logs for AzureMachine self-hosted-fs1pu9-md-win-48m2v

Failed to get logs for machine self-hosted-fs1pu9-md-win-65f5c64b4c-4ngll, cluster self-hosted/self-hosted-fs1pu9: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Oct 26 00:42:59.712: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster self-hosted-fs1pu9 in namespace self-hosted

Oct 26 00:43:32.326: INFO: INFO: Collecting boot logs for AzureMachine self-hosted-fs1pu9-md-win-vdw5c

Failed to get logs for machine self-hosted-fs1pu9-md-win-65f5c64b4c-5rj7n, cluster self-hosted/self-hosted-fs1pu9: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster self-hosted/self-hosted-fs1pu9 kube-system pod logs
STEP: Fetching kube-system pod logs took 1.014879431s
STEP: Dumping workload cluster self-hosted/self-hosted-fs1pu9 Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-windows-2mkzx, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-n77h4, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-zgkfs, container coredns
... skipping 14 lines ...
STEP: Fetching activity logs took 1.115802175s
STEP: Dumping all the Cluster API resources in the "self-hosted" namespace
STEP: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-fs1pu9
INFO: Waiting for the Cluster self-hosted/self-hosted-fs1pu9 to be deleted
STEP: Waiting for cluster self-hosted-fs1pu9 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vv6rk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-sj4tn, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8wtcc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-cx6v4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-2mkzx, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-sj4tn, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-f9t76, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-2mkzx, container calico-node-felix: http2: client connection lost
STEP: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
STEP: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
STEP: Redacting sensitive information from logs

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:40
  Running the self-hosted spec
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:159
    Should pivot the bootstrap cluster to a self-hosted cluster [It]
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:103

    failed to run clusterctl init
    Unexpected error:
        <*errors.errorString | 0xc00032f430>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred

... skipping 97 lines ...
Oct 26 00:36:01.908: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-nsde6n-md-0-0yqyt1-tntl8

Oct 26 00:36:02.391: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster md-rollout-nsde6n in namespace md-rollout-rpu1da

Oct 26 00:37:17.292: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-nsde6n-md-win-z76jb

Failed to get logs for machine md-rollout-nsde6n-md-win-56b898dff8-2k9xx, cluster md-rollout-rpu1da/md-rollout-nsde6n: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Oct 26 00:37:17.667: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster md-rollout-nsde6n in namespace md-rollout-rpu1da

Oct 26 00:38:12.411: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-nsde6n-md-win-slvsf

Failed to get logs for machine md-rollout-nsde6n-md-win-56b898dff8-ss8s5, cluster md-rollout-rpu1da/md-rollout-nsde6n: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Oct 26 00:38:12.788: INFO: INFO: Collecting logs for node 10.1.0.8 in cluster md-rollout-nsde6n in namespace md-rollout-rpu1da

Oct 26 00:38:41.524: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-nsde6n-md-win-rvnyl0-8kzmz

Failed to get logs for machine md-rollout-nsde6n-md-win-d96577ccd-vt7mv, cluster md-rollout-rpu1da/md-rollout-nsde6n: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-rollout-rpu1da/md-rollout-nsde6n kube-system pod logs
STEP: Fetching kube-system pod logs took 1.041241911s
STEP: Creating log watcher for controller kube-system/calico-node-fwmmj, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-d9nps, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-rollout-nsde6n-control-plane-ptc57, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-rollout-nsde6n-control-plane-ptc57, container kube-controller-manager
... skipping 17 lines ...
STEP: Fetching activity logs took 963.000239ms
STEP: Dumping all the Cluster API resources in the "md-rollout-rpu1da" namespace
STEP: Deleting cluster md-rollout-rpu1da/md-rollout-nsde6n
STEP: Deleting cluster md-rollout-nsde6n
INFO: Waiting for the Cluster md-rollout-rpu1da/md-rollout-nsde6n to be deleted
STEP: Waiting for cluster md-rollout-nsde6n to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-9qvsl, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-qncc9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-rollout-nsde6n-control-plane-ptc57, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fznqw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-rollout-nsde6n-control-plane-ptc57, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-v4t5t, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-rollout-nsde6n-control-plane-ptc57, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-k25ls, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-69r4w, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jmtsf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-rollout-nsde6n-control-plane-ptc57, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-5vmdj, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-k25ls, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-95cbg, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-v4t5t, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fwmmj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-d9nps, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-rollout" test spec
INFO: Deleting namespace md-rollout-rpu1da
STEP: Redacting sensitive information from logs


• [SLOW TEST:2097.853 seconds]
... skipping 58 lines ...
STEP: Fetching activity logs took 529.74224ms
STEP: Dumping all the Cluster API resources in the "kcp-adoption-r01vu6" namespace
STEP: Deleting cluster kcp-adoption-r01vu6/kcp-adoption-cmemb3
STEP: Deleting cluster kcp-adoption-cmemb3
INFO: Waiting for the Cluster kcp-adoption-r01vu6/kcp-adoption-cmemb3 to be deleted
STEP: Waiting for cluster kcp-adoption-cmemb3 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lnkrf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-adoption-cmemb3-control-plane-0, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-hm46s, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kmhrd, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-adoption-cmemb3-control-plane-0, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-adoption-cmemb3-control-plane-0, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-adoption-cmemb3-control-plane-0, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xbk8s, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-25tbh, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-adoption" test spec
INFO: Deleting namespace kcp-adoption-r01vu6
STEP: Redacting sensitive information from logs


• [SLOW TEST:788.487 seconds]
... skipping 182 lines ...
STEP: Fetching activity logs took 953.444889ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-xmqwwl" namespace
STEP: Deleting cluster mhc-remediation-xmqwwl/mhc-remediation-fe61m6
STEP: Deleting cluster mhc-remediation-fe61m6
INFO: Waiting for the Cluster mhc-remediation-xmqwwl/mhc-remediation-fe61m6 to be deleted
STEP: Waiting for cluster mhc-remediation-fe61m6 to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-fe61m6-control-plane-5rgbk, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-fe61m6-control-plane-5rgbk, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-fe61m6-control-plane-5rgbk, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rk772, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-fe61m6-control-plane-5rgbk, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4p447, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-xmqwwl
STEP: Redacting sensitive information from logs


• [SLOW TEST:2399.078 seconds]
... skipping 60 lines ...
Oct 26 01:34:00.156: INFO: INFO: Collecting boot logs for AzureMachine machine-pool-7d8uea-control-plane-k58j2

Oct 26 01:34:01.365: INFO: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-7d8uea in namespace machine-pool-hivdpa

Oct 26 01:34:12.710: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set machine-pool-7d8uea-mp-0

Failed to get logs for machine pool machine-pool-7d8uea-mp-0, cluster machine-pool-hivdpa/machine-pool-7d8uea: [running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1]
Oct 26 01:34:13.243: INFO: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-7d8uea in namespace machine-pool-hivdpa

Oct 26 01:34:45.900: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set win-p-win

Failed to get logs for machine pool machine-pool-7d8uea-mp-win, cluster machine-pool-hivdpa/machine-pool-7d8uea: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster machine-pool-hivdpa/machine-pool-7d8uea kube-system pod logs
STEP: Fetching kube-system pod logs took 1.044338323s
STEP: Creating log watcher for controller kube-system/calico-node-hcwlx, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-z7zp8, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-pcqvb, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-4spjn, container kube-proxy
... skipping 15 lines ...
STEP: Fetching activity logs took 607.16814ms
STEP: Dumping all the Cluster API resources in the "machine-pool-hivdpa" namespace
STEP: Deleting cluster machine-pool-hivdpa/machine-pool-7d8uea
STEP: Deleting cluster machine-pool-7d8uea
INFO: Waiting for the Cluster machine-pool-hivdpa/machine-pool-7d8uea to be deleted
STEP: Waiting for cluster machine-pool-7d8uea to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-dgbtt, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-8d569, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-7d8uea-control-plane-k58j2, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-cf9tw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-65xjd, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-7d8uea-control-plane-k58j2, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-7d8uea-control-plane-k58j2, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-ktfgk, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-qjb5t, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-dgbtt, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-7d8uea-control-plane-k58j2, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-z7zp8, container kube-proxy: http2: client connection lost
STEP: Error starting logs stream for pod kube-system/kube-proxy-b7rsr, container kube-proxy: Get "https://machine-pool-7d8uea-3eb19765.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/kube-system/pods/kube-proxy-b7rsr/log?container=kube-proxy&follow=true": http2: client connection lost
STEP: Error starting logs stream for pod kube-system/calico-node-w4vph, container calico-node: Get "https://machine-pool-7d8uea-3eb19765.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/kube-system/pods/calico-node-w4vph/log?container=calico-node&follow=true": http2: client connection lost
STEP: Error starting logs stream for pod kube-system/kube-proxy-pcqvb, container kube-proxy: Get "https://machine-pool-7d8uea-3eb19765.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/kube-system/pods/kube-proxy-pcqvb/log?container=kube-proxy&follow=true": http2: client connection lost
STEP: Error starting logs stream for pod kube-system/calico-node-hcwlx, container calico-node: Get "https://machine-pool-7d8uea-3eb19765.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/kube-system/pods/calico-node-hcwlx/log?container=calico-node&follow=true": http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-hivdpa
STEP: Redacting sensitive information from logs


• [SLOW TEST:1974.820 seconds]
... skipping 61 lines ...
Oct 26 01:33:12.725: INFO: INFO: Collecting boot logs for AzureMachine md-scale-chv7cs-md-0-6tflv

Oct 26 01:33:13.135: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster md-scale-chv7cs in namespace md-scale-6dwdvz

Oct 26 01:34:51.770: INFO: INFO: Collecting boot logs for AzureMachine md-scale-chv7cs-md-win-4qw6l

Failed to get logs for machine md-scale-chv7cs-md-win-5cbc69d497-5pxzx, cluster md-scale-6dwdvz/md-scale-chv7cs: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Oct 26 01:34:52.891: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster md-scale-chv7cs in namespace md-scale-6dwdvz

Oct 26 01:35:33.083: INFO: INFO: Collecting boot logs for AzureMachine md-scale-chv7cs-md-win-ct9js

Failed to get logs for machine md-scale-chv7cs-md-win-5cbc69d497-dmzqm, cluster md-scale-6dwdvz/md-scale-chv7cs: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-scale-6dwdvz/md-scale-chv7cs kube-system pod logs
STEP: Fetching kube-system pod logs took 1.042809769s
STEP: Dumping workload cluster md-scale-6dwdvz/md-scale-chv7cs Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-725w2, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-windows-725ck, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-chv7cs-control-plane-5p275, container kube-controller-manager
... skipping 14 lines ...
STEP: Fetching activity logs took 985.422706ms
STEP: Dumping all the Cluster API resources in the "md-scale-6dwdvz" namespace
STEP: Deleting cluster md-scale-6dwdvz/md-scale-chv7cs
STEP: Deleting cluster md-scale-chv7cs
INFO: Waiting for the Cluster md-scale-6dwdvz/md-scale-chv7cs to be deleted
STEP: Waiting for cluster md-scale-chv7cs to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-zr4z9, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-725ck, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8w8q8, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-scale-chv7cs-control-plane-5p275, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-scale-chv7cs-control-plane-5p275, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-dfp7r, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-725w2, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-zr4z9, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-scale-chv7cs-control-plane-5p275, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-zpnm4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8blmw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-scale-chv7cs-control-plane-5p275, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-tpj4s, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-725ck, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kr2j7, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-scrkt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-cfc9l, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-scale" test spec
INFO: Deleting namespace md-scale-6dwdvz
STEP: Redacting sensitive information from logs


• [SLOW TEST:1888.393 seconds]
... skipping 56 lines ...
STEP: Dumping logs from the "node-drain-5hv3sc" workload cluster
STEP: Dumping workload cluster node-drain-nvrb4o/node-drain-5hv3sc logs
Oct 26 01:55:48.754: INFO: INFO: Collecting logs for node node-drain-5hv3sc-control-plane-t86hj in cluster node-drain-5hv3sc in namespace node-drain-nvrb4o

Oct 26 01:57:58.405: INFO: INFO: Collecting boot logs for AzureMachine node-drain-5hv3sc-control-plane-t86hj

Failed to get logs for machine node-drain-5hv3sc-control-plane-vxkdk, cluster node-drain-nvrb4o/node-drain-5hv3sc: dialing public load balancer at node-drain-5hv3sc-cc919d9.northeurope.cloudapp.azure.com: dial tcp 20.93.50.234:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-nvrb4o/node-drain-5hv3sc kube-system pod logs
STEP: Fetching kube-system pod logs took 937.908314ms
STEP: Dumping workload cluster node-drain-nvrb4o/node-drain-5hv3sc Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-rx7v8, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-5hv3sc-control-plane-t86hj, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-b9cgw, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-node-drain-5hv3sc-control-plane-t86hj, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-j2gm5, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-8llq2, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-node-drain-5hv3sc-control-plane-t86hj, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-node-drain-5hv3sc-control-plane-t86hj, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-96vm2, container coredns
STEP: Got error while iterating over activity logs for resource group capz-e2e-efnyca: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000199724s
STEP: Dumping all the Cluster API resources in the "node-drain-nvrb4o" namespace
STEP: Deleting cluster node-drain-nvrb4o/node-drain-5hv3sc
STEP: Deleting cluster node-drain-5hv3sc
INFO: Waiting for the Cluster node-drain-nvrb4o/node-drain-5hv3sc to be deleted
STEP: Waiting for cluster node-drain-5hv3sc to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-rx7v8, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-node-drain-5hv3sc-control-plane-t86hj, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-node-drain-5hv3sc-control-plane-t86hj, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-b9cgw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-j2gm5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-96vm2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-node-drain-5hv3sc-control-plane-t86hj, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8llq2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-node-drain-5hv3sc-control-plane-t86hj, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "node-drain" test spec
INFO: Deleting namespace node-drain-nvrb4o
STEP: Redacting sensitive information from logs


• [SLOW TEST:2070.210 seconds]
... skipping 7 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests Running the self-hosted spec [It] Should pivot the bootstrap cluster to a self-hosted cluster 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/clusterctl/client.go:85

Ran 12 of 22 Specs in 8830.111 seconds
FAIL! -- 11 Passed | 1 Failed | 0 Pending | 10 Skipped


Ginkgo ran 1 suite in 2h28m40.72354522s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...