PR: CecileRobertMichon: Update CAPI to v0.4.3
Result: FAILURE
Tests: 1 failed / 11 succeeded
Started: 2021-09-27 22:12
Elapsed: 2h3m
Revision: 76853c9a14e099ae7712a9902abe6dd3c77c289d
Refs: 1728

Test Failures


capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a single control plane cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd (14m12s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sKCP\supgrade\sspec\sin\sa\ssingle\scontrol\splane\scluster\sShould\ssuccessfully\supgrade\sKubernetes\,\sDNS\,\skube\-proxy\,\sand\setcd$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.3/e2e/kcp_upgrade.go:75
Expected success, but got an error:
    <errors.aggregate | len:1, cap:1>: [
        <*errors.StatusError | 0xc000594460>{
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {
                    SelfLink: "",
                    ResourceVersion: "",
                    Continue: "",
                    RemainingItemCount: nil,
                },
                Status: "Failure",
                Message: "admission webhook \"validation.kubeadmcontrolplane.controlplane.cluster.x-k8s.io\" denied the request: KubeadmControlPlane.controlplane.cluster.x-k8s.io \"kcp-upgrade-uuotly-control-plane\" is invalid: spec.kubeadmConfigSpec.clusterConfiguration.etcd.local.dataDir: Forbidden: cannot be modified",
                Reason: "Invalid",
                Details: {
                    Name: "kcp-upgrade-uuotly-control-plane",
                    Group: "controlplane.cluster.x-k8s.io",
                    Kind: "KubeadmControlPlane",
                    UID: "",
                    Causes: [
                        {
                            Type: "FieldValueForbidden",
                            Message: "Forbidden: cannot be modified",
                            Field: "spec.kubeadmConfigSpec.clusterConfiguration.etcd.local.dataDir",
                        },
                        {
                            Type: "FieldValueForbidden",
                            Message: "Forbidden: cannot be modified",
                            Field: "spec.kubeadmConfigSpec.clusterConfiguration.etcd.local.dataDir",
                        },
                        {
                            Type: "FieldValueForbidden",
                            Message: "Forbidden: cannot be modified",
                            Field: "spec.kubeadmConfigSpec.clusterConfiguration.etcd.local.dataDir",
                        },
                    ],
                    RetryAfterSeconds: 0,
                },
                Code: 422,
            },
        },
    ]
    admission webhook "validation.kubeadmcontrolplane.controlplane.cluster.x-k8s.io" denied the request: KubeadmControlPlane.controlplane.cluster.x-k8s.io "kcp-upgrade-uuotly-control-plane" is invalid: spec.kubeadmConfigSpec.clusterConfiguration.etcd.local.dataDir: Forbidden: cannot be modified
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.3/framework/controlplane_helpers.go:322
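The rejection above comes from the KubeadmControlPlane validating admission webhook: spec.kubeadmConfigSpec.clusterConfiguration.etcd.local.dataDir is treated as immutable, so the spec change attempted by the upgrade test is denied with a FieldValueForbidden cause. Below is a minimal Go sketch of that style of immutability check; it is not the actual CAPI webhook code, the trimmed-down type and the example dataDir values are hypothetical, and only the field path and the "cannot be modified" reason are taken from the error message.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation/field"
)

// Hypothetical, trimmed-down stand-in for the relevant part of a
// KubeadmControlPlane spec.
type kcpSpec struct {
	EtcdDataDir string
}

// validateImmutableDataDir rejects any change to the etcd dataDir field,
// producing the same FieldValueForbidden cause reported in the failure above.
func validateImmutableDataDir(old, updated kcpSpec) field.ErrorList {
	var allErrs field.ErrorList
	path := field.NewPath("spec", "kubeadmConfigSpec", "clusterConfiguration", "etcd", "local", "dataDir")
	if updated.EtcdDataDir != old.EtcdDataDir {
		allErrs = append(allErrs, field.Forbidden(path, "cannot be modified"))
	}
	return allErrs
}

func main() {
	// Example values only; the actual dataDirs involved are not shown in this log.
	stored := kcpSpec{EtcdDataDir: "/var/lib/etcddisk/etcd"}
	update := kcpSpec{EtcdDataDir: "/var/lib/etcd"}
	for _, err := range validateImmutableDataDir(stored, update) {
		// Prints: spec.kubeadmConfigSpec.clusterConfiguration.etcd.local.dataDir: Forbidden: cannot be modified
		fmt.Println(err.Error())
	}
}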
				




Error lines from build-log.txt

... skipping 492 lines ...
STEP: Fetching activity logs took 829.564378ms
STEP: Dumping all the Cluster API resources in the "quick-start-ss54c1" namespace
STEP: Deleting cluster quick-start-ss54c1/quick-start-gy4vx0
STEP: Deleting cluster quick-start-gy4vx0
INFO: Waiting for the Cluster quick-start-ss54c1/quick-start-gy4vx0 to be deleted
STEP: Waiting for cluster quick-start-gy4vx0 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-quick-start-gy4vx0-control-plane-jjjlv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ln57j, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-sqphf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dbf9g, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-quick-start-gy4vx0-control-plane-jjjlv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-jkr8d, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8bd2k, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-quick-start-gy4vx0-control-plane-jjjlv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-wdvvd, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-k9sbc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-quick-start-gy4vx0-control-plane-jjjlv, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "quick-start" test spec
INFO: Deleting namespace quick-start-ss54c1
STEP: Redacting sensitive information from logs


• [SLOW TEST:686.717 seconds]
... skipping 68 lines ...
STEP: Fetching activity logs took 542.900805ms
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-p5dswv" namespace
STEP: Deleting cluster kcp-upgrade-p5dswv/kcp-upgrade-uuotly
STEP: Deleting cluster kcp-upgrade-uuotly
INFO: Waiting for the Cluster kcp-upgrade-p5dswv/kcp-upgrade-uuotly to be deleted
STEP: Waiting for cluster kcp-upgrade-uuotly to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-qqxsk, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-uuotly-control-plane-zhj29, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dlj5c, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-uuotly-control-plane-zhj29, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-kvrd6, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-rnzpw, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-msfvs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-uuotly-control-plane-zhj29, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-mq8d5, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-zx4jh, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-uuotly-control-plane-zhj29, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-p5dswv
STEP: Redacting sensitive information from logs


• Failure [852.612 seconds]
Running the Cluster API E2E tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:40
  Running the KCP upgrade spec in a single control plane cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:103
    Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd [It]
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.3/e2e/kcp_upgrade.go:75

    Expected success, but got an error:
        <errors.aggregate | len:1, cap:1>: [
            <*errors.StatusError | 0xc000594460>{
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
... skipping 119 lines ...
Sep 27 22:38:43.882: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-cvgaqt-control-plane-7t5vm

Sep 27 22:38:45.155: INFO: INFO: Collecting logs for node md-rollout-cvgaqt-md-0-284hm in cluster md-rollout-cvgaqt in namespace md-rollout-2s5ybn

Sep 27 22:40:58.993: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-cvgaqt-md-0-284hm

Failed to get logs for machine md-rollout-cvgaqt-md-0-75c47ccfbf-v5nh4, cluster md-rollout-2s5ybn/md-rollout-cvgaqt: [dialing from control plane to target node at md-rollout-cvgaqt-md-0-284hm: ssh: rejected: connect failed (Connection timed out), failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollout-cvgaqt-md-0-284hm' under resource group 'capz-e2e-cp6nfl' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"]
Sep 27 22:40:59.754: INFO: INFO: Collecting logs for node md-rollout-cvgaqt-md-0-ia3eot-8qrv7 in cluster md-rollout-cvgaqt in namespace md-rollout-2s5ybn

Sep 27 22:41:13.409: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-cvgaqt-md-0-ia3eot-8qrv7

STEP: Dumping workload cluster md-rollout-2s5ybn/md-rollout-cvgaqt kube-system pod logs
STEP: Fetching kube-system pod logs took 981.031132ms
... skipping 116 lines ...
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-04whv2-control-plane-wrrz4, container etcd
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-04whv2-control-plane-wrrz4, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-n5ggr, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-2b9lf, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-04whv2-control-plane-s6wg2, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-04whv2-control-plane-wrrz4, container kube-apiserver
STEP: Got error while iterating over activity logs for resource group capz-e2e-cmjyhb: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000584593s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-gt3kdr" namespace
STEP: Deleting cluster kcp-upgrade-gt3kdr/kcp-upgrade-04whv2
STEP: Deleting cluster kcp-upgrade-04whv2
INFO: Waiting for the Cluster kcp-upgrade-gt3kdr/kcp-upgrade-04whv2 to be deleted
STEP: Waiting for cluster kcp-upgrade-04whv2 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-04whv2-control-plane-vc5bj, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-hdwdl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-l7ckp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kbt7m, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-04whv2-control-plane-vc5bj, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-04whv2-control-plane-wrrz4, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wnssb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-bkmh2, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rkvgr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-m5mwv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-2b9lf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-04whv2-control-plane-wrrz4, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-04whv2-control-plane-s6wg2, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-04whv2-control-plane-s6wg2, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-n5ggr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qjq52, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-04whv2-control-plane-vc5bj, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-q72zl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-04whv2-control-plane-wrrz4, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-04whv2-control-plane-wrrz4, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-04whv2-control-plane-s6wg2, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-04whv2-control-plane-s6wg2, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-04whv2-control-plane-vc5bj, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-gt3kdr
STEP: Redacting sensitive information from logs


• [SLOW TEST:2353.287 seconds]
... skipping 7 lines ...
Running the Cluster API E2E tests Running the self-hosted spec 
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:103

STEP: Creating namespace "self-hosted" for hosting the cluster
Sep 27 22:48:27.186: INFO: starting to create namespace for hosting the "self-hosted" test spec
2021/09/27 22:48:27 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-9zkfk3" using the "(default)" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-9zkfk3 --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
... skipping 75 lines ...
STEP: Fetching activity logs took 491.919179ms
STEP: Dumping all the Cluster API resources in the "self-hosted" namespace
STEP: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-9zkfk3
INFO: Waiting for the Cluster self-hosted/self-hosted-9zkfk3 to be deleted
STEP: Waiting for cluster self-hosted-9zkfk3 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-kzszl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-self-hosted-9zkfk3-control-plane-4k7d6, container kube-apiserver: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-67bfdf96f9-rfgsx, container kube-rbac-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-n6hvp, container calico-node: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-67bfdf96f9-rfgsx, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-rc66q, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qxrpj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-j769k, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-drcj2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-self-hosted-9zkfk3-control-plane-4k7d6, container kube-controller-manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-bfcd78f99-vlnsl, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-66b74b44bd-8tgz7, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-self-hosted-9zkfk3-control-plane-4k7d6, container kube-scheduler: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-54f94494bd-5mrt6, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-28g2j, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-self-hosted-9zkfk3-control-plane-4k7d6, container etcd: http2: client connection lost
STEP: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
STEP: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
STEP: Redacting sensitive information from logs

... skipping 93 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-go5t38-control-plane-btkrf, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-nj6lh, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-go5t38-control-plane-h8hwm, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-spvdq, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-bffzm, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-go5t38-control-plane-h8hwm, container etcd
STEP: Got error while iterating over activity logs for resource group capz-e2e-mrc97u: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000440093s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-smagqt" namespace
STEP: Deleting cluster kcp-upgrade-smagqt/kcp-upgrade-go5t38
STEP: Deleting cluster kcp-upgrade-go5t38
INFO: Waiting for the Cluster kcp-upgrade-smagqt/kcp-upgrade-go5t38 to be deleted
STEP: Waiting for cluster kcp-upgrade-go5t38 to be deleted
... skipping 80 lines ...
STEP: Fetching activity logs took 698.544089ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-kaqr7l" namespace
STEP: Deleting cluster mhc-remediation-kaqr7l/mhc-remediation-5andww
STEP: Deleting cluster mhc-remediation-5andww
INFO: Waiting for the Cluster mhc-remediation-kaqr7l/mhc-remediation-5andww to be deleted
STEP: Waiting for cluster mhc-remediation-5andww to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-wmxq8, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-5andww-control-plane-qx587, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-5andww-control-plane-qx587, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-pgbpj, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-5andww-control-plane-qx587, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-nfx4w, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2qhdp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-k7v85, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-cs8xv, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-5andww-control-plane-qx587, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dz82d, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-kaqr7l
STEP: Redacting sensitive information from logs


• [SLOW TEST:1117.210 seconds]
... skipping 166 lines ...
STEP: Fetching activity logs took 983.058908ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-znkfmr" namespace
STEP: Deleting cluster mhc-remediation-znkfmr/mhc-remediation-1oibje
STEP: Deleting cluster mhc-remediation-1oibje
INFO: Waiting for the Cluster mhc-remediation-znkfmr/mhc-remediation-1oibje to be deleted
STEP: Waiting for cluster mhc-remediation-1oibje to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4b686, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-tl6w5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-krds9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-46nft, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-1oibje-control-plane-fzjws, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-1oibje-control-plane-8m2k5, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-mq7r4, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-lzxp9, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-lpf5d, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-1oibje-control-plane-fzjws, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-1oibje-control-plane-fzjws, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-1oibje-control-plane-txk25, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-x7rpc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-pnvsb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2tps7, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-1oibje-control-plane-8m2k5, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-1oibje-control-plane-8m2k5, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-1oibje-control-plane-txk25, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-1oibje-control-plane-txk25, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-1oibje-control-plane-8m2k5, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-1oibje-control-plane-txk25, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-1oibje-control-plane-fzjws, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8kwvp, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-znkfmr
STEP: Redacting sensitive information from logs


• [SLOW TEST:1576.954 seconds]
... skipping 70 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-2v833e-control-plane-4c9x5, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-t2q69, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-ldnbk, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-sdxfh, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-z9lqc, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-6rhss, container kube-proxy
STEP: Error starting logs stream for pod kube-system/calico-node-7gj8c, container calico-node: pods "machine-pool-2v833e-mp-0000001" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-ldnbk, container kube-proxy: pods "machine-pool-2v833e-mp-0000001" not found
STEP: Error starting logs stream for pod kube-system/calico-node-sdxfh, container calico-node: pods "machine-pool-2v833e-mp-0000000" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-z9lqc, container kube-proxy: pods "machine-pool-2v833e-mp-0000000" not found
STEP: Fetching activity logs took 498.307377ms
STEP: Dumping all the Cluster API resources in the "machine-pool-9kumpa" namespace
STEP: Deleting cluster machine-pool-9kumpa/machine-pool-2v833e
STEP: Deleting cluster machine-pool-2v833e
INFO: Waiting for the Cluster machine-pool-9kumpa/machine-pool-2v833e to be deleted
STEP: Waiting for cluster machine-pool-2v833e to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-lczk6, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-2v833e-control-plane-4c9x5, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-2v833e-control-plane-4c9x5, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-2v833e-control-plane-4c9x5, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-q48fd, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6rhss, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-lg44d, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-2v833e-control-plane-4c9x5, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-t2q69, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-g7qz6, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-x8jhk, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-9kumpa
STEP: Redacting sensitive information from logs


• [SLOW TEST:1199.189 seconds]
... skipping 73 lines ...
STEP: Fetching activity logs took 523.785931ms
STEP: Dumping all the Cluster API resources in the "md-scale-78h6lw" namespace
STEP: Deleting cluster md-scale-78h6lw/md-scale-cdksox
STEP: Deleting cluster md-scale-cdksox
INFO: Waiting for the Cluster md-scale-78h6lw/md-scale-cdksox to be deleted
STEP: Waiting for cluster md-scale-cdksox to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-wpxh2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-scale-cdksox-control-plane-9skk7, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jwrk4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-scale-cdksox-control-plane-9skk7, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-scale-cdksox-control-plane-9skk7, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-8qwwc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-scale-cdksox-control-plane-9skk7, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-96lww, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-pqsqm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-v26hm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-zs84j, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-scale" test spec
INFO: Deleting namespace md-scale-78h6lw
STEP: Redacting sensitive information from logs


• [SLOW TEST:1253.596 seconds]
... skipping 56 lines ...
STEP: Dumping logs from the "node-drain-oy474r" workload cluster
STEP: Dumping workload cluster node-drain-49zolo/node-drain-oy474r logs
Sep 28 00:05:14.460: INFO: INFO: Collecting logs for node node-drain-oy474r-control-plane-pjqll in cluster node-drain-oy474r in namespace node-drain-49zolo

Sep 28 00:07:24.878: INFO: INFO: Collecting boot logs for AzureMachine node-drain-oy474r-control-plane-pjqll

Failed to get logs for machine node-drain-oy474r-control-plane-8tt8n, cluster node-drain-49zolo/node-drain-oy474r: dialing public load balancer at node-drain-oy474r-781b1ca8.northeurope.cloudapp.azure.com: dial tcp 168.61.89.151:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-49zolo/node-drain-oy474r kube-system pod logs
STEP: Fetching kube-system pod logs took 934.246389ms
STEP: Dumping workload cluster node-drain-49zolo/node-drain-oy474r Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-9rtz5, container coredns
STEP: Creating log watcher for controller kube-system/etcd-node-drain-oy474r-control-plane-pjqll, container etcd
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-2trn8, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-controller-manager-node-drain-oy474r-control-plane-pjqll, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-v5p9v, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-tzxlt, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-oy474r-control-plane-pjqll, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-scheduler-node-drain-oy474r-control-plane-pjqll, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-xhnrg, container coredns
STEP: Got error while iterating over activity logs for resource group capz-e2e-5535ea: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00034678s
STEP: Dumping all the Cluster API resources in the "node-drain-49zolo" namespace
STEP: Deleting cluster node-drain-49zolo/node-drain-oy474r
STEP: Deleting cluster node-drain-oy474r
INFO: Waiting for the Cluster node-drain-49zolo/node-drain-oy474r to be deleted
STEP: Waiting for cluster node-drain-oy474r to be deleted
... skipping 13 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests Running the KCP upgrade spec in a single control plane cluster [It] Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.3/framework/controlplane_helpers.go:322

Ran 12 of 22 Specs in 7053.336 seconds
FAIL! -- 11 Passed | 1 Failed | 0 Pending | 10 Skipped


Ginkgo ran 1 suite in 1h58m53.149044799s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...