PR shysank: v1beta1 cluster upgrade tests (using clusterctl upgrade)
Result FAILURE
Tests 1 failed / 12 succeeded
Started 2021-10-17 22:18
Elapsed 2h52m
Revision 8ef1b69014470ffb722e49f6809a4dcbfe6a6c78
Refs 1771

Test Failures


capz-e2e Running the Cluster API E2E tests upgrade from v1alpha3 to v1beta1, and scale workload clusters created in v1alpha3 Should create a management cluster and then upgrade all the providers (1h8m)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\supgrade\sfrom\sv1alpha3\sto\sv1beta1\,\sand\sscale\sworkload\sclusters\screated\sin\sv1alpha3\sShould\screate\sa\smanagement\scluster\sand\sthen\supgrade\sall\sthe\sproviders$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/clusterctl_upgrade.go:145
Timed out after 1200.001s.
Expected
    <int>: 1
to equal
    <int>: 2
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/machinedeployment_helpers.go:348
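
The assertion that timed out is a Gomega Eventually poll: the suite repeatedly counts the MachineDeployment's ready replicas and requires the count to reach the desired replica count (2) within the 1200s window, and the failure message reports the last observed value (1). The sketch below shows the shape of that check only; the helper name countReadyMachines and the interval values are illustrative assumptions, not the actual code at machinedeployment_helpers.go:348.

    package e2e_test

    import (
        "context"
        "time"

        . "github.com/onsi/gomega"
    )

    // countReadyMachines is a hypothetical stand-in for the framework's
    // lookup of ready MachineDeployment replicas on the management cluster.
    func countReadyMachines(ctx context.Context) int {
        // ...list Machines owned by the MachineDeployment and count the
        // ones whose Node is ready...
        return 0
    }

    // Minimal sketch of the polling assertion: Eventually re-invokes the
    // function until it returns desiredReplicas or the timeout expires,
    // at which point Gomega prints output of the form
    // "Timed out after 1200.001s. Expected <int>: 1 to equal <int>: 2".
    func waitForMachineDeploymentScale(ctx context.Context) {
        desiredReplicas := 2
        Eventually(func() int {
            return countReadyMachines(ctx)
        }, 20*time.Minute, 10*time.Second).Should(Equal(desiredReplicas))
    }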




Error lines from build-log.txt

... skipping 491 lines ...
STEP: Fetching activity logs took 846.320144ms
STEP: Dumping all the Cluster API resources in the "quick-start-tgktgt" namespace
STEP: Deleting cluster quick-start-tgktgt/quick-start-pdl037
STEP: Deleting cluster quick-start-pdl037
INFO: Waiting for the Cluster quick-start-tgktgt/quick-start-pdl037 to be deleted
STEP: Waiting for cluster quick-start-pdl037 to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-quick-start-pdl037-control-plane-xmg2j, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-2m55k, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-quick-start-pdl037-control-plane-xmg2j, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-quick-start-pdl037-control-plane-xmg2j, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-ktgx5, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hmjsj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-sgzxf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-5lfmn, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-quick-start-pdl037-control-plane-xmg2j, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "quick-start" test spec
INFO: Deleting namespace quick-start-tgktgt
STEP: Redacting sensitive information from logs


• [SLOW TEST:716.970 seconds]
... skipping 52 lines ...
STEP: Dumping logs from the "kcp-upgrade-u360w2" workload cluster
STEP: Dumping workload cluster kcp-upgrade-to8lt3/kcp-upgrade-u360w2 logs
Oct 17 22:36:56.796: INFO: INFO: Collecting logs for node kcp-upgrade-u360w2-control-plane-2zbp5 in cluster kcp-upgrade-u360w2 in namespace kcp-upgrade-to8lt3

Oct 17 22:39:07.684: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-u360w2-control-plane-2zbp5

Failed to get logs for machine kcp-upgrade-u360w2-control-plane-26jrn, cluster kcp-upgrade-to8lt3/kcp-upgrade-u360w2: dialing public load balancer at kcp-upgrade-u360w2-b07fc1c5.northeurope.cloudapp.azure.com: dial tcp 20.67.196.132:22: connect: connection timed out
Oct 17 22:39:10.147: INFO: INFO: Collecting logs for node kcp-upgrade-u360w2-md-0-rgjx7 in cluster kcp-upgrade-u360w2 in namespace kcp-upgrade-to8lt3

Oct 17 22:41:20.804: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-u360w2-md-0-rgjx7

Failed to get logs for machine kcp-upgrade-u360w2-md-0-5c76577774-76kgh, cluster kcp-upgrade-to8lt3/kcp-upgrade-u360w2: dialing public load balancer at kcp-upgrade-u360w2-b07fc1c5.northeurope.cloudapp.azure.com: dial tcp 20.67.196.132:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-to8lt3/kcp-upgrade-u360w2 kube-system pod logs
STEP: Fetching kube-system pod logs took 964.459474ms
STEP: Dumping workload cluster kcp-upgrade-to8lt3/kcp-upgrade-u360w2 Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-5cm5v, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-u360w2-control-plane-2zbp5, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-vgz9p, container kube-proxy
... skipping 8 lines ...
STEP: Fetching activity logs took 1.536420392s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-to8lt3" namespace
STEP: Deleting cluster kcp-upgrade-to8lt3/kcp-upgrade-u360w2
STEP: Deleting cluster kcp-upgrade-u360w2
INFO: Waiting for the Cluster kcp-upgrade-to8lt3/kcp-upgrade-u360w2 to be deleted
STEP: Waiting for cluster kcp-upgrade-u360w2 to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-67d6x, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-u360w2-control-plane-2zbp5, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kfpgl, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5cm5v, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-76hpg, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-79642, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-u360w2-control-plane-2zbp5, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-kpp85, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vgz9p, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-u360w2-control-plane-2zbp5, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-u360w2-control-plane-2zbp5, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-to8lt3
STEP: Redacting sensitive information from logs


• [SLOW TEST:1328.705 seconds]
... skipping 53 lines ...
Oct 17 22:54:38.035: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-j2fmbc-control-plane-4zswz

Oct 17 22:54:39.368: INFO: INFO: Collecting logs for node md-rollout-j2fmbc-md-0-zptvg in cluster md-rollout-j2fmbc in namespace md-rollout-ox8aav

Oct 17 22:56:52.763: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-j2fmbc-md-0-zptvg

Failed to get logs for machine md-rollout-j2fmbc-md-0-5bf68b47b7-pxjzr, cluster md-rollout-ox8aav/md-rollout-j2fmbc: [dialing from control plane to target node at md-rollout-j2fmbc-md-0-zptvg: ssh: rejected: connect failed (Connection timed out), failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollout-j2fmbc-md-0-zptvg' under resource group 'capz-e2e-4cqjon' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"]
Oct 17 22:56:53.377: INFO: INFO: Collecting logs for node md-rollout-j2fmbc-md-0-9enx1k-d7fqb in cluster md-rollout-j2fmbc in namespace md-rollout-ox8aav

Oct 17 22:57:03.287: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-j2fmbc-md-0-9enx1k-d7fqb

STEP: Dumping workload cluster md-rollout-ox8aav/md-rollout-j2fmbc kube-system pod logs
STEP: Fetching kube-system pod logs took 946.936391ms
... skipping 12 lines ...
STEP: Fetching activity logs took 574.220355ms
STEP: Dumping all the Cluster API resources in the "md-rollout-ox8aav" namespace
STEP: Deleting cluster md-rollout-ox8aav/md-rollout-j2fmbc
STEP: Deleting cluster md-rollout-j2fmbc
INFO: Waiting for the Cluster md-rollout-ox8aav/md-rollout-j2fmbc to be deleted
STEP: Waiting for cluster md-rollout-j2fmbc to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-ms86q, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6c8z7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-rollout-j2fmbc-control-plane-4zswz, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wr56s, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-rollout-j2fmbc-control-plane-4zswz, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-lbwv4, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-558lf, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-rollout-j2fmbc-control-plane-4zswz, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-spzdx, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-9zkdj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-rollout-j2fmbc-control-plane-4zswz, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-rollout" test spec
INFO: Deleting namespace md-rollout-ox8aav
STEP: Redacting sensitive information from logs


• [SLOW TEST:952.786 seconds]
... skipping 92 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-4858g, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-t2gdz, container coredns
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-4nuyz0-control-plane-v9cnz, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-qf5bn, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-4nuyz0-control-plane-q9nnh, container kube-scheduler
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-4nuyz0-control-plane-zkjqb, container etcd
STEP: Got error while iterating over activity logs for resource group capz-e2e-z6t00h: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001211575s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-zc6qy2" namespace
STEP: Deleting cluster kcp-upgrade-zc6qy2/kcp-upgrade-4nuyz0
STEP: Deleting cluster kcp-upgrade-4nuyz0
INFO: Waiting for the Cluster kcp-upgrade-zc6qy2/kcp-upgrade-4nuyz0 to be deleted
STEP: Waiting for cluster kcp-upgrade-4nuyz0 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-4nuyz0-control-plane-v9cnz, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-4nuyz0-control-plane-v9cnz, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fnbss, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-4nuyz0-control-plane-v9cnz, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-4nuyz0-control-plane-q9nnh, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-4nuyz0-control-plane-zkjqb, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-4nuyz0-control-plane-v9cnz, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-4nuyz0-control-plane-zkjqb, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mjtcw, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-95cpd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-t2gdz, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-pcbx6, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-t5s6q, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-4858g, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qf5bn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-4nuyz0-control-plane-zkjqb, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-4nuyz0-control-plane-q9nnh, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-4nuyz0-control-plane-q9nnh, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6f88m, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-4nuyz0-control-plane-zkjqb, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-9tg5k, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-4nuyz0-control-plane-q9nnh, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9296l, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-zc6qy2
STEP: Redacting sensitive information from logs


• [SLOW TEST:2305.305 seconds]
... skipping 91 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-w7vz4d-control-plane-nzfz8, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-w7vz4d-control-plane-pbqr9, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-w7vz4d-control-plane-nzfz8, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-w7vz4d-control-plane-pbqr9, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-htmnh, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-w7vz4d-control-plane-nzfkb, container etcd
STEP: Got error while iterating over activity logs for resource group capz-e2e-hnxopn: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000715447s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-hwo6ww" namespace
STEP: Deleting cluster kcp-upgrade-hwo6ww/kcp-upgrade-w7vz4d
STEP: Deleting cluster kcp-upgrade-w7vz4d
INFO: Waiting for the Cluster kcp-upgrade-hwo6ww/kcp-upgrade-w7vz4d to be deleted
STEP: Waiting for cluster kcp-upgrade-w7vz4d to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-w7vz4d-control-plane-nzfkb, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ngf4l, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-w7vz4d-control-plane-nzfkb, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-w7vz4d-control-plane-nzfkb, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-w7vz4d-control-plane-nzfkb, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-htmnh, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-hwo6ww
STEP: Redacting sensitive information from logs


• [SLOW TEST:2345.434 seconds]
... skipping 74 lines ...
STEP: Fetching activity logs took 571.174289ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-03jqz7" namespace
STEP: Deleting cluster mhc-remediation-03jqz7/mhc-remediation-90825c
STEP: Deleting cluster mhc-remediation-90825c
INFO: Waiting for the Cluster mhc-remediation-03jqz7/mhc-remediation-90825c to be deleted
STEP: Waiting for cluster mhc-remediation-90825c to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8474n, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-90825c-control-plane-mcmwp, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-90825c-control-plane-mcmwp, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6w4k7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-90825c-control-plane-mcmwp, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-vlrxw, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-dpf9d, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rpc7r, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-90825c-control-plane-mcmwp, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wfqw7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-s559z, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-03jqz7
STEP: Redacting sensitive information from logs


• [SLOW TEST:1663.659 seconds]
... skipping 7 lines ...
Running the Cluster API E2E tests Running the self-hosted spec 
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:103

STEP: Creating namespace "self-hosted" for hosting the cluster
Oct 17 23:03:04.148: INFO: starting to create namespace for hosting the "self-hosted" test spec
2021/10/17 23:03:04 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-mr2mao" using the "(default)" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-mr2mao --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
... skipping 74 lines ...
STEP: Fetching activity logs took 579.976711ms
STEP: Dumping all the Cluster API resources in the "self-hosted" namespace
STEP: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-mr2mao
INFO: Waiting for the Cluster self-hosted/self-hosted-mr2mao to be deleted
STEP: Waiting for cluster self-hosted-mr2mao to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mm2j9, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-hcxzw, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-tbmhj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8w62n, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-self-hosted-mr2mao-control-plane-xwdqc, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-self-hosted-mr2mao-control-plane-xwdqc, container etcd: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-865c969d7-69lhf, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-self-hosted-mr2mao-control-plane-xwdqc, container kube-controller-manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6bdc78c4d4-kr7bd, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-7db568b6d6-sncrd, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-7bkzx, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-self-hosted-mr2mao-control-plane-xwdqc, container kube-apiserver: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-86b5f554dd-xhvrx, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-z8kzh, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bvtlg, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
STEP: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
STEP: Redacting sensitive information from logs

... skipping 54 lines ...
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-gwgnd, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-adoption-1p58tv-control-plane-0, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-nlk7w, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-adoption-1p58tv-control-plane-0, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-hm4dw, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-adoption-1p58tv-control-plane-0, container kube-scheduler
STEP: Error starting logs stream for pod kube-system/calico-kube-controllers-846b5f484d-lfth4, container calico-kube-controllers: container "calico-kube-controllers" in pod "calico-kube-controllers-846b5f484d-lfth4" is waiting to start: ContainerCreating
STEP: Error starting logs stream for pod kube-system/coredns-78fcd69978-gwgnd, container coredns: container "coredns" in pod "coredns-78fcd69978-gwgnd" is waiting to start: ContainerCreating
STEP: Fetching activity logs took 588.488674ms
STEP: Dumping all the Cluster API resources in the "kcp-adoption-4iq5xc" namespace
STEP: Deleting cluster kcp-adoption-4iq5xc/kcp-adoption-1p58tv
STEP: Deleting cluster kcp-adoption-1p58tv
INFO: Waiting for the Cluster kcp-adoption-4iq5xc/kcp-adoption-1p58tv to be deleted
STEP: Waiting for cluster kcp-adoption-1p58tv to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nlk7w, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-adoption-1p58tv-control-plane-0, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-adoption-1p58tv-control-plane-0, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-adoption-1p58tv-control-plane-0, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-adoption-1p58tv-control-plane-0, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qlr67, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-hm4dw, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-adoption" test spec
INFO: Deleting namespace kcp-adoption-4iq5xc
STEP: Redacting sensitive information from logs


• [SLOW TEST:607.958 seconds]
... skipping 70 lines ...
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-vpsz7, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-4wfvq, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-6hqxb, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-sl4d7, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-dwzzcc-control-plane-pglgk, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-dwzzcc-control-plane-pglgk, container kube-scheduler
STEP: Error starting logs stream for pod kube-system/calico-node-dkqxn, container calico-node: pods "machine-pool-dwzzcc-mp-0000001" not found
STEP: Error starting logs stream for pod kube-system/calico-node-sl4d7, container calico-node: pods "machine-pool-dwzzcc-mp-0000000" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-zvvrn, container kube-proxy: pods "machine-pool-dwzzcc-mp-0000001" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-4wfvq, container kube-proxy: pods "machine-pool-dwzzcc-mp-0000000" not found
STEP: Fetching activity logs took 650.435452ms
STEP: Dumping all the Cluster API resources in the "machine-pool-dpx9y6" namespace
STEP: Deleting cluster machine-pool-dpx9y6/machine-pool-dwzzcc
STEP: Deleting cluster machine-pool-dwzzcc
INFO: Waiting for the Cluster machine-pool-dpx9y6/machine-pool-dwzzcc to be deleted
STEP: Waiting for cluster machine-pool-dwzzcc to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-dwzzcc-control-plane-pglgk, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-dwzzcc-control-plane-pglgk, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-dwzzcc-control-plane-pglgk, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bj9lb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qvfqs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-r86kk, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-trffg, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-dm8jq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-vpsz7, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6hqxb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-dwzzcc-control-plane-pglgk, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-dpx9y6
STEP: Redacting sensitive information from logs


• [SLOW TEST:1473.902 seconds]
... skipping 175 lines ...
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-7ejx9b-control-plane-b42wh, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-xbfw7, container calico-node
STEP: Fetching kube-system pod logs took 578.773427ms
STEP: Dumping workload cluster mhc-remediation-pg631c/mhc-remediation-7ejx9b Azure activity log
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-7ejx9b-control-plane-b42wh, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-7ejx9b-control-plane-b42wh, container kube-controller-manager
STEP: Got error while iterating over activity logs for resource group capz-e2e-2y6syz: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001482706s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-pg631c" namespace
STEP: Deleting cluster mhc-remediation-pg631c/mhc-remediation-7ejx9b
STEP: Deleting cluster mhc-remediation-7ejx9b
INFO: Waiting for the Cluster mhc-remediation-pg631c/mhc-remediation-7ejx9b to be deleted
STEP: Waiting for cluster mhc-remediation-7ejx9b to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-7ncdt, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-bk96f, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xbfw7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hxdnt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-7ejx9b-control-plane-b42wh, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rmwlb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-7ejx9b-control-plane-b42wh, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-7ejx9b-control-plane-b42wh, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-9ctxp, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-7ejx9b-control-plane-b42wh, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-pg631c
STEP: Redacting sensitive information from logs


• [SLOW TEST:3460.104 seconds]
... skipping 58 lines ...
STEP: Dumping logs from the "node-drain-6a5vfo" workload cluster
STEP: Dumping workload cluster node-drain-835owf/node-drain-6a5vfo logs
Oct 18 00:22:43.489: INFO: INFO: Collecting logs for node node-drain-6a5vfo-control-plane-svsfv in cluster node-drain-6a5vfo in namespace node-drain-835owf

Oct 18 00:24:54.436: INFO: INFO: Collecting boot logs for AzureMachine node-drain-6a5vfo-control-plane-svsfv

Failed to get logs for machine node-drain-6a5vfo-control-plane-ws2sm, cluster node-drain-835owf/node-drain-6a5vfo: dialing public load balancer at node-drain-6a5vfo-f33f678.northeurope.cloudapp.azure.com: dial tcp 20.82.199.66:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-835owf/node-drain-6a5vfo kube-system pod logs
STEP: Fetching kube-system pod logs took 891.64463ms
STEP: Dumping workload cluster node-drain-835owf/node-drain-6a5vfo Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-node-drain-6a5vfo-control-plane-svsfv, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-6a5vfo-control-plane-svsfv, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-g2ww9, container calico-node
... skipping 6 lines ...
STEP: Fetching activity logs took 1.152386998s
STEP: Dumping all the Cluster API resources in the "node-drain-835owf" namespace
STEP: Deleting cluster node-drain-835owf/node-drain-6a5vfo
STEP: Deleting cluster node-drain-6a5vfo
INFO: Waiting for the Cluster node-drain-835owf/node-drain-6a5vfo to be deleted
STEP: Waiting for cluster node-drain-6a5vfo to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-l8hgw, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-2n8rs, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-node-drain-6a5vfo-control-plane-svsfv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-qt582, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-node-drain-6a5vfo-control-plane-svsfv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-node-drain-6a5vfo-control-plane-svsfv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-g2ww9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-b8w4k, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-node-drain-6a5vfo-control-plane-svsfv, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "node-drain" test spec
INFO: Deleting namespace node-drain-835owf
STEP: Redacting sensitive information from logs


• [SLOW TEST:2153.487 seconds]
... skipping 164 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests upgrade from v1alpha3 to v1beta1, and scale workload clusters created in v1alpha3 [It] Should create a management cluster and then upgrade all the providers 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/machinedeployment_helpers.go:348

Ran 13 of 23 Specs in 9979.379 seconds
FAIL! -- 12 Passed | 1 Failed | 0 Pending | 10 Skipped


Ginkgo ran 1 suite in 2h47m38.5245679s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...