PR: jsturtevant: Fix Capi e2e tests by upgrading from 1.22
Result: FAILURE
Tests: 1 failed / 11 succeeded
Started: 2021-10-27 06:09
Elapsed: 2h9m
Revision: fb0df74441b73416f277896ee46fb962feebc2eb
Refs: 1792

Test Failures


capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster (30m29s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sself\-hosted\sspec\sShould\spivot\sthe\sbootstrap\scluster\sto\sa\sself\-hosted\scluster$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107
Timed out after 1200.000s.
Expected
    <string>: Provisioning
to equal
    <string>: Provisioned
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/cluster_helpers.go:134
				
Full stdout/stderr: junit.e2e_suite.2.xml
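
The failure above is the framework's wait on the Cluster phase: the assertion at cluster_helpers.go:134 polls the Cluster object until its status phase reads Provisioned, and in this run it stayed at Provisioning for the full 1200s window. Below is a minimal Go sketch of that kind of wait, assuming a Ginkgo/Gomega test context; the package name, helper name, ObjectKey and the 20-minute/10-second intervals are illustrative placeholders, not the framework's actual values.

// Illustrative sketch only, not the framework's exact code: it shows the shape of
// the polling check whose timeout produced the Provisioning/Provisioned diff above.
package e2esketch

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// WaitForClusterProvisioned polls the management cluster until the named Cluster
// reports phase "Provisioned". If the phase never advances past "Provisioning",
// Eventually fails with the string mismatch seen in the report above.
func WaitForClusterProvisioned(ctx context.Context, c client.Client, key client.ObjectKey) {
	Eventually(func() string {
		cluster := &clusterv1.Cluster{}
		if err := c.Get(ctx, key, cluster); err != nil {
			return err.Error()
		}
		return cluster.Status.Phase
	}, 20*time.Minute, 10*time.Second).Should(Equal("Provisioned"), "cluster %s did not become Provisioned", key.Name)
}

A phase stuck at Provisioning typically means the workload cluster's infrastructure or control plane did not come up within the window; the dumped self-hosted cluster logs later in this file are where the root cause would surface.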



11 Passed Tests

10 Skipped Tests

Error lines from build-log.txt

... skipping 472 lines ...
Oct 27 06:24:41.247: INFO: INFO: Collecting boot logs for AzureMachine quick-start-gqpg42-md-0-z622s

Oct 27 06:24:41.682: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster quick-start-gqpg42 in namespace quick-start-2dprdy

Oct 27 06:25:13.404: INFO: INFO: Collecting boot logs for AzureMachine quick-start-gqpg42-md-win-rrzsg

Failed to get logs for machine quick-start-gqpg42-md-win-666796868-48bwp, cluster quick-start-2dprdy/quick-start-gqpg42: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Oct 27 06:25:13.736: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster quick-start-gqpg42 in namespace quick-start-2dprdy

Oct 27 06:26:10.082: INFO: INFO: Collecting boot logs for AzureMachine quick-start-gqpg42-md-win-ldmwl

Failed to get logs for machine quick-start-gqpg42-md-win-666796868-755k2, cluster quick-start-2dprdy/quick-start-gqpg42: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster quick-start-2dprdy/quick-start-gqpg42 kube-system pod logs
STEP: Fetching kube-system pod logs took 576.269788ms
STEP: Dumping workload cluster quick-start-2dprdy/quick-start-gqpg42 Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-skwmd, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-tmt4r, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-9m6ff, container kube-proxy
... skipping 14 lines ...
STEP: Fetching activity logs took 542.192186ms
STEP: Dumping all the Cluster API resources in the "quick-start-2dprdy" namespace
STEP: Deleting cluster quick-start-2dprdy/quick-start-gqpg42
STEP: Deleting cluster quick-start-gqpg42
INFO: Waiting for the Cluster quick-start-2dprdy/quick-start-gqpg42 to be deleted
STEP: Waiting for cluster quick-start-gqpg42 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-nxr8q, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-nxr8q, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-c6sf5, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-c6sf5, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-ds5hr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-llqwd, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "quick-start" test spec
INFO: Deleting namespace quick-start-2dprdy
STEP: Redacting sensitive information from logs


• [SLOW TEST:1239.192 seconds]
... skipping 74 lines ...
Oct 27 06:44:53.417: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-m1ahep-md-0-kfh2r

Oct 27 06:44:53.698: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-m1ahep in namespace kcp-upgrade-m79bgd

Oct 27 06:45:23.026: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-m1ahep-md-win-7n97j

Failed to get logs for machine kcp-upgrade-m1ahep-md-win-6b8665974-6s6cz, cluster kcp-upgrade-m79bgd/kcp-upgrade-m1ahep: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Oct 27 06:45:23.514: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster kcp-upgrade-m1ahep in namespace kcp-upgrade-m79bgd

Oct 27 06:45:59.596: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-m1ahep-md-win-c77zh

Failed to get logs for machine kcp-upgrade-m1ahep-md-win-6b8665974-g94sf, cluster kcp-upgrade-m79bgd/kcp-upgrade-m1ahep: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster kcp-upgrade-m79bgd/kcp-upgrade-m1ahep kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-g4x4z, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-m1ahep-control-plane-hxcpb, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-fstsr, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-ztvmp, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-m1ahep-control-plane-f649j, container kube-apiserver
... skipping 20 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-windows-5fvs8, container calico-node-felix
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-m1ahep-control-plane-qz8dd, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-rp7cc, container calico-node
STEP: Fetching kube-system pod logs took 502.141629ms
STEP: Creating log watcher for controller kube-system/calico-node-windows-5fvs8, container calico-node-startup
STEP: Dumping workload cluster kcp-upgrade-m79bgd/kcp-upgrade-m1ahep Azure activity log
STEP: Got error while iterating over activity logs for resource group capz-e2e-nyngjl: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000539532s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-m79bgd" namespace
STEP: Deleting cluster kcp-upgrade-m79bgd/kcp-upgrade-m1ahep
STEP: Deleting cluster kcp-upgrade-m1ahep
INFO: Waiting for the Cluster kcp-upgrade-m79bgd/kcp-upgrade-m1ahep to be deleted
STEP: Waiting for cluster kcp-upgrade-m1ahep to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-m1ahep-control-plane-hxcpb, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-g4x4z, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-ztvmp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-5fvs8, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-gj9cg, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-hbzwc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-5fvs8, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vv9tr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-6kplt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-msjrf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wwqvs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rp7cc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-gj9cg, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-m1ahep-control-plane-hxcpb, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-m1ahep-control-plane-f649j, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-m1ahep-control-plane-hxcpb, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7tcfp, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-m1ahep-control-plane-f649j, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-m1ahep-control-plane-f649j, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-tvq6c, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-m1ahep-control-plane-hxcpb, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-m79bgd
STEP: Redacting sensitive information from logs


• [SLOW TEST:2179.311 seconds]
... skipping 56 lines ...
STEP: Dumping logs from the "kcp-upgrade-thl25p" workload cluster
STEP: Dumping workload cluster kcp-upgrade-prdfzc/kcp-upgrade-thl25p logs
Oct 27 06:31:17.981: INFO: INFO: Collecting logs for node kcp-upgrade-thl25p-control-plane-8n8r8 in cluster kcp-upgrade-thl25p in namespace kcp-upgrade-prdfzc

Oct 27 06:33:28.452: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-thl25p-control-plane-8n8r8

Failed to get logs for machine kcp-upgrade-thl25p-control-plane-sp4dz, cluster kcp-upgrade-prdfzc/kcp-upgrade-thl25p: dialing public load balancer at kcp-upgrade-thl25p-6eb246a1.westus2.cloudapp.azure.com: dial tcp 20.69.133.172:22: connect: connection timed out
Oct 27 06:33:29.521: INFO: INFO: Collecting logs for node kcp-upgrade-thl25p-md-0-kns7d in cluster kcp-upgrade-thl25p in namespace kcp-upgrade-prdfzc

Oct 27 06:35:39.524: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-thl25p-md-0-kns7d

Failed to get logs for machine kcp-upgrade-thl25p-md-0-66b9685c88-k8dt4, cluster kcp-upgrade-prdfzc/kcp-upgrade-thl25p: dialing public load balancer at kcp-upgrade-thl25p-6eb246a1.westus2.cloudapp.azure.com: dial tcp 20.69.133.172:22: connect: connection timed out
Oct 27 06:35:40.502: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster kcp-upgrade-thl25p in namespace kcp-upgrade-prdfzc

Oct 27 06:42:12.736: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-thl25p-md-win-vs26t

Failed to get logs for machine kcp-upgrade-thl25p-md-win-5d79dc9db7-c8xls, cluster kcp-upgrade-prdfzc/kcp-upgrade-thl25p: dialing public load balancer at kcp-upgrade-thl25p-6eb246a1.westus2.cloudapp.azure.com: dial tcp 20.69.133.172:22: connect: connection timed out
Oct 27 06:42:13.591: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster kcp-upgrade-thl25p in namespace kcp-upgrade-prdfzc

Oct 27 06:48:45.952: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-thl25p-md-win-b9f5q

Failed to get logs for machine kcp-upgrade-thl25p-md-win-5d79dc9db7-dkmj7, cluster kcp-upgrade-prdfzc/kcp-upgrade-thl25p: dialing public load balancer at kcp-upgrade-thl25p-6eb246a1.westus2.cloudapp.azure.com: dial tcp 20.69.133.172:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-prdfzc/kcp-upgrade-thl25p kube-system pod logs
STEP: Fetching kube-system pod logs took 605.052965ms
STEP: Dumping workload cluster kcp-upgrade-prdfzc/kcp-upgrade-thl25p Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-kdzkc, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-thl25p-control-plane-8n8r8, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-windows-88jbg, container calico-node-felix
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-v2787, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-jcmvt, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-thl25p-control-plane-8n8r8, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-czcnn, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-thl25p-control-plane-8n8r8, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-dntv9, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-qolq8j: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00104599s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-prdfzc" namespace
STEP: Deleting cluster kcp-upgrade-prdfzc/kcp-upgrade-thl25p
STEP: Deleting cluster kcp-upgrade-thl25p
INFO: Waiting for the Cluster kcp-upgrade-prdfzc/kcp-upgrade-thl25p to be deleted
STEP: Waiting for cluster kcp-upgrade-thl25p to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-jcmvt, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-v2787, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kqplb, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-r8dqd, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-88jbg, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-thl25p-control-plane-8n8r8, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-d56jg, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-4tfzw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-dfp7v, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-thl25p-control-plane-8n8r8, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-88jbg, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-thl25p-control-plane-8n8r8, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-czcnn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-kdzkc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-r8dqd, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-thl25p-control-plane-8n8r8, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dntv9, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-prdfzc
STEP: Redacting sensitive information from logs


• [SLOW TEST:2525.309 seconds]
... skipping 91 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-22znb, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-k82evh-control-plane-lzcls, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-k82evh-control-plane-rmc6l, container kube-scheduler
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-k82evh-control-plane-5vs2f, container etcd
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-k82evh-control-plane-rmc6l, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-pzv5b, container calico-node
STEP: Got error while iterating over activity logs for resource group capz-e2e-ly538g: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001250742s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-7cwikm" namespace
STEP: Deleting cluster kcp-upgrade-7cwikm/kcp-upgrade-k82evh
STEP: Deleting cluster kcp-upgrade-k82evh
INFO: Waiting for the Cluster kcp-upgrade-7cwikm/kcp-upgrade-k82evh to be deleted
STEP: Waiting for cluster kcp-upgrade-k82evh to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-pzv5b, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-k82evh-control-plane-rmc6l, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-22znb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-k82evh-control-plane-5vs2f, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-k82evh-control-plane-rmc6l, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-k82evh-control-plane-lzcls, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7vvdr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-j687q, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xdkvk, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fnhsz, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-k82evh-control-plane-lzcls, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-k82evh-control-plane-5vs2f, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-k82evh-control-plane-rmc6l, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-fbmj2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-k82evh-control-plane-lzcls, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-k82evh-control-plane-lzcls, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-k82evh-control-plane-5vs2f, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mndln, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-k82evh-control-plane-rmc6l, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-ghwhb, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-k82evh-control-plane-5vs2f, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-n9gms, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kwjf2, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-7cwikm
STEP: Redacting sensitive information from logs


• [SLOW TEST:2009.884 seconds]
... skipping 66 lines ...
Oct 27 07:08:12.047: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-03831t-md-0-rwi1l9-76w2l

Oct 27 07:08:12.377: INFO: INFO: Collecting logs for node 10.1.0.8 in cluster md-rollout-03831t in namespace md-rollout-ddfy1i

Oct 27 07:08:38.189: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-03831t-md-win-9ah7si-znj7c

Failed to get logs for machine md-rollout-03831t-md-win-6d5f994587-26pfl, cluster md-rollout-ddfy1i/md-rollout-03831t: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Oct 27 07:08:38.523: INFO: INFO: Collecting logs for node 10.1.0.4 in cluster md-rollout-03831t in namespace md-rollout-ddfy1i

Oct 27 07:11:21.344: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-03831t-md-win-6wxmg

Failed to get logs for machine md-rollout-03831t-md-win-bcd885f79-bjsgj, cluster md-rollout-ddfy1i/md-rollout-03831t: [[dialing from control plane to target node at 10.1.0.4: ssh: rejected: connect failed (Connection timed out), [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]], failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollou-6wxmg' under resource group 'capz-e2e-7db7y1' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"]
Oct 27 07:11:21.911: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster md-rollout-03831t in namespace md-rollout-ddfy1i

Oct 27 07:11:54.178: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-03831t-md-win-9ttkv

Failed to get logs for machine md-rollout-03831t-md-win-bcd885f79-qnlp5, cluster md-rollout-ddfy1i/md-rollout-03831t: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-rollout-ddfy1i/md-rollout-03831t kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-node-fvbk5, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-md-rollout-03831t-control-plane-ccp4m, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-windows-qcfr6, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-kcg7b, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-rollout-03831t-control-plane-ccp4m, container kube-controller-manager
... skipping 11 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-windows-hh9wx, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-ckcqh, container kube-proxy
STEP: Dumping workload cluster md-rollout-ddfy1i/md-rollout-03831t Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-r77rq, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-windows-klpvp, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-nvwn5, container calico-node
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-lvqk6, container kube-proxy: container "kube-proxy" in pod "kube-proxy-windows-lvqk6" is waiting to start: ContainerCreating
STEP: Error starting logs stream for pod kube-system/calico-node-windows-klpvp, container calico-node-felix: container "calico-node-felix" in pod "calico-node-windows-klpvp" is waiting to start: PodInitializing
STEP: Error starting logs stream for pod kube-system/calico-node-windows-klpvp, container calico-node-startup: container "calico-node-startup" in pod "calico-node-windows-klpvp" is waiting to start: PodInitializing
STEP: Fetching activity logs took 1.245998742s
STEP: Dumping all the Cluster API resources in the "md-rollout-ddfy1i" namespace
STEP: Deleting cluster md-rollout-ddfy1i/md-rollout-03831t
STEP: Deleting cluster md-rollout-03831t
INFO: Waiting for the Cluster md-rollout-ddfy1i/md-rollout-03831t to be deleted
STEP: Waiting for cluster md-rollout-03831t to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-rollout-03831t-control-plane-ccp4m, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-ckcqh, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-nj57b, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-qcfr6, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-rollout-03831t-control-plane-ccp4m, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-rollout-03831t-control-plane-ccp4m, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-qcfr6, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-hh9wx, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-hh9wx, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-k2l9p, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kcg7b, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-k7bnd, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-r77rq, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fvbk5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-rollout-03831t-control-plane-ccp4m, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-nvwn5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-rls2z, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-rollout" test spec
INFO: Deleting namespace md-rollout-ddfy1i
STEP: Redacting sensitive information from logs


• [SLOW TEST:1699.048 seconds]
... skipping 7 lines ...
Running the Cluster API E2E tests Running the self-hosted spec 
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107

STEP: Creating namespace "self-hosted" for hosting the cluster
Oct 27 06:58:19.011: INFO: starting to create namespace for hosting the "self-hosted" test spec
2021/10/27 06:58:19 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-ndp2bf" using the "management" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-ndp2bf --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 147 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-adoption-3frxsn-control-plane-0, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-sjzg9, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-adoption-3frxsn-control-plane-0, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-nxld5, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-kcp-adoption-3frxsn-control-plane-0, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-adoption-3frxsn-control-plane-0, container kube-apiserver
STEP: Error starting logs stream for pod kube-system/coredns-78fcd69978-txfjm, container coredns: container "coredns" in pod "coredns-78fcd69978-txfjm" is waiting to start: ContainerCreating
STEP: Error starting logs stream for pod kube-system/calico-kube-controllers-846b5f484d-d5twm, container calico-kube-controllers: container "calico-kube-controllers" in pod "calico-kube-controllers-846b5f484d-d5twm" is waiting to start: ContainerCreating
STEP: Fetching activity logs took 484.816855ms
STEP: Dumping all the Cluster API resources in the "kcp-adoption-glaval" namespace
STEP: Deleting cluster kcp-adoption-glaval/kcp-adoption-3frxsn
STEP: Deleting cluster kcp-adoption-3frxsn
INFO: Waiting for the Cluster kcp-adoption-glaval/kcp-adoption-3frxsn to be deleted
STEP: Waiting for cluster kcp-adoption-3frxsn to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-adoption-3frxsn-control-plane-0, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-adoption-3frxsn-control-plane-0, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-adoption-3frxsn-control-plane-0, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-sjzg9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-adoption-3frxsn-control-plane-0, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nxld5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-rmx77, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-adoption" test spec
INFO: Deleting namespace kcp-adoption-glaval
STEP: Redacting sensitive information from logs


• [SLOW TEST:587.448 seconds]
... skipping 182 lines ...
STEP: Fetching activity logs took 1.000278597s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-8zbyln" namespace
STEP: Deleting cluster mhc-remediation-8zbyln/mhc-remediation-k2hayx
STEP: Deleting cluster mhc-remediation-k2hayx
INFO: Waiting for the Cluster mhc-remediation-8zbyln/mhc-remediation-k2hayx to be deleted
STEP: Waiting for cluster mhc-remediation-k2hayx to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-c6h8k, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-k2hayx-control-plane-2hwj6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5nqd9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-k2hayx-control-plane-2hwj6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8z4ld, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-k2hayx-control-plane-2hwj6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lftgg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-mcw4h, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-k2hayx-control-plane-2mml9, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-k2hayx-control-plane-2mml9, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-k2hayx-control-plane-2hwj6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-k2hayx-control-plane-2mml9, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2crsd, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-km8js, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-k2hayx-control-plane-2mml9, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-8zbyln
STEP: Redacting sensitive information from logs


• [SLOW TEST:1506.715 seconds]
... skipping 61 lines ...
Oct 27 07:59:34.706: INFO: INFO: Collecting boot logs for AzureMachine md-scale-z5299n-md-0-jtn9k

Oct 27 07:59:35.289: INFO: INFO: Collecting logs for node 10.1.0.6 in cluster md-scale-z5299n in namespace md-scale-m0d6o8

Oct 27 08:00:29.986: INFO: INFO: Collecting boot logs for AzureMachine md-scale-z5299n-md-win-rj8nk

Failed to get logs for machine md-scale-z5299n-md-win-857bf49b64-k27zw, cluster md-scale-m0d6o8/md-scale-z5299n: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Oct 27 08:00:30.299: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster md-scale-z5299n in namespace md-scale-m0d6o8

Oct 27 08:01:47.465: INFO: INFO: Collecting boot logs for AzureMachine md-scale-z5299n-md-win-5rvgl

Failed to get logs for machine md-scale-z5299n-md-win-857bf49b64-mj4kg, cluster md-scale-m0d6o8/md-scale-z5299n: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster md-scale-m0d6o8/md-scale-z5299n kube-system pod logs
STEP: Fetching kube-system pod logs took 564.861937ms
STEP: Dumping workload cluster md-scale-m0d6o8/md-scale-z5299n Azure activity log
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-z5299n-control-plane-8pv77, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-zh4lx, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-proxy-4d5n9, container kube-proxy
... skipping 14 lines ...
STEP: Fetching activity logs took 1.043853311s
STEP: Dumping all the Cluster API resources in the "md-scale-m0d6o8" namespace
STEP: Deleting cluster md-scale-m0d6o8/md-scale-z5299n
STEP: Deleting cluster md-scale-z5299n
INFO: Waiting for the Cluster md-scale-m0d6o8/md-scale-z5299n to be deleted
STEP: Waiting for cluster md-scale-z5299n to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-4xcmw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-scale-z5299n-control-plane-8pv77, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-rzmrr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-scale-z5299n-control-plane-8pv77, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4d5n9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-nh6lq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-zh4lx, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kwtjw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-vzhn6, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kbxgj, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-q9gsh, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-wm9ms, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-prgtf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-vzhn6, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-wm9ms, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-scale-z5299n-control-plane-8pv77, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-scale-z5299n-control-plane-8pv77, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-scale" test spec
INFO: Deleting namespace md-scale-m0d6o8
STEP: Redacting sensitive information from logs


• [SLOW TEST:1860.151 seconds]
... skipping 62 lines ...
Oct 27 08:02:33.554: INFO: INFO: Collecting boot logs for AzureMachine machine-pool-1qb1u8-control-plane-7qpwn

Oct 27 08:02:34.557: INFO: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-1qb1u8 in namespace machine-pool-a96usw

Oct 27 08:03:08.005: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set machine-pool-1qb1u8-mp-0

Failed to get logs for machine pool machine-pool-1qb1u8-mp-0, cluster machine-pool-a96usw/machine-pool-1qb1u8: [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1]
Oct 27 08:03:08.459: INFO: INFO: Collecting logs for node win-p-win000002 in cluster machine-pool-1qb1u8 in namespace machine-pool-a96usw

Oct 27 08:04:20.497: INFO: INFO: Collecting boot logs for VMSS instance 2 of scale set win-p-win

Failed to get logs for machine pool machine-pool-1qb1u8-mp-win, cluster machine-pool-a96usw/machine-pool-1qb1u8: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster machine-pool-a96usw/machine-pool-1qb1u8 kube-system pod logs
STEP: Fetching kube-system pod logs took 572.1078ms
STEP: Dumping workload cluster machine-pool-a96usw/machine-pool-1qb1u8 Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-nfvhd, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-1qb1u8-control-plane-7qpwn, container etcd
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-1qb1u8-control-plane-7qpwn, container kube-controller-manager
... skipping 11 lines ...
STEP: Fetching activity logs took 591.092029ms
STEP: Dumping all the Cluster API resources in the "machine-pool-a96usw" namespace
STEP: Deleting cluster machine-pool-a96usw/machine-pool-1qb1u8
STEP: Deleting cluster machine-pool-1qb1u8
INFO: Waiting for the Cluster machine-pool-a96usw/machine-pool-1qb1u8 to be deleted
STEP: Waiting for cluster machine-pool-1qb1u8 to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-1qb1u8-control-plane-7qpwn, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-zd2ps, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-1qb1u8-control-plane-7qpwn, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kmn7r, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-xg9kb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-bc2sj, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-245gq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-1qb1u8-control-plane-7qpwn, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-nfvhd, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lrkrl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-1qb1u8-control-plane-7qpwn, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-9t4z9, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-f8qbp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-bc2sj, container calico-node-felix: http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-a96usw
STEP: Redacting sensitive information from logs


• [SLOW TEST:2260.055 seconds]
... skipping 56 lines ...
STEP: Dumping logs from the "node-drain-r71bpi" workload cluster
STEP: Dumping workload cluster node-drain-fwuq2o/node-drain-r71bpi logs
Oct 27 08:08:45.384: INFO: INFO: Collecting logs for node node-drain-r71bpi-control-plane-26pq9 in cluster node-drain-r71bpi in namespace node-drain-fwuq2o

Oct 27 08:10:55.492: INFO: INFO: Collecting boot logs for AzureMachine node-drain-r71bpi-control-plane-26pq9

Failed to get logs for machine node-drain-r71bpi-control-plane-nrb2b, cluster node-drain-fwuq2o/node-drain-r71bpi: dialing public load balancer at node-drain-r71bpi-4b62b8fb.westus2.cloudapp.azure.com: dial tcp 20.99.180.129:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-fwuq2o/node-drain-r71bpi kube-system pod logs
STEP: Fetching kube-system pod logs took 564.864843ms
STEP: Dumping workload cluster node-drain-fwuq2o/node-drain-r71bpi Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-sn8bj, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-r71bpi-control-plane-26pq9, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-4q4r5, container calico-node
... skipping 6 lines ...
STEP: Fetching activity logs took 1.098053252s
STEP: Dumping all the Cluster API resources in the "node-drain-fwuq2o" namespace
STEP: Deleting cluster node-drain-fwuq2o/node-drain-r71bpi
STEP: Deleting cluster node-drain-r71bpi
INFO: Waiting for the Cluster node-drain-fwuq2o/node-drain-r71bpi to be deleted
STEP: Waiting for cluster node-drain-r71bpi to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-sn8bj, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-node-drain-r71bpi-control-plane-26pq9, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-vf45m, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-node-drain-r71bpi-control-plane-26pq9, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4q4r5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-96q8n, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wbgbv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-node-drain-r71bpi-control-plane-26pq9, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-node-drain-r71bpi-control-plane-26pq9, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "node-drain" test spec
INFO: Deleting namespace node-drain-fwuq2o
STEP: Redacting sensitive information from logs


• [SLOW TEST:1887.815 seconds]
... skipping 7 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests Running the self-hosted spec [It] Should pivot the bootstrap cluster to a self-hosted cluster 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/cluster_helpers.go:134

Ran 12 of 22 Specs in 7392.307 seconds
FAIL! -- 11 Passed | 1 Failed | 0 Pending | 10 Skipped


Ginkgo ran 1 suite in 2h4m36.120026567s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...