PR CecileRobertMichon: Update CAPI to v0.4.3
Result FAILURE
Tests 1 failed / 11 succeeded
Started 2021-09-27 19:59
Elapsed 2h3m
Revision 76853c9a14e099ae7712a9902abe6dd3c77c289d
Refs 1728

Test Failures


capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation 25m58s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sShould\ssuccessfully\sremediate\sunhealthy\smachines\swith\sMachineHealthCheck\sShould\ssuccessfully\strigger\sKCP\sremediation$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.3/e2e/mhc_remediations.go:115
Failed to get controller-runtime client
Unexpected error:
    <*url.Error | 0xc00157be30>: {
        Op: "Get",
        URL: "https://mhc-remediation-qa7c9u-7959f1f0.northeurope.cloudapp.azure.com:6443/api?timeout=32s",
        Err: <*http.httpError | 0xc0050bdb00>{
            err: "net/http: request canceled (Client.Timeout exceeded while awaiting headers)",
            timeout: true,
        },
    }
    Get "https://mhc-remediation-qa7c9u-7959f1f0.northeurope.cloudapp.azure.com:6443/api?timeout=32s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
occurred
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.3/framework/cluster_proxy.go:171
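Context for the failure above: at cluster_proxy.go:171 the test framework fails to get a controller-runtime client for the workload cluster, and the timeout is against the cluster's API endpoint at :6443. Below is a minimal, hypothetical Go sketch (not the cluster-api test framework source; the WORKLOAD_KUBECONFIG environment variable is an assumption) of how such a client is typically built from a kubeconfig, and where a *url.Error like the one reported would surface if the control-plane endpoint does not answer.

// Hypothetical sketch: constructing a controller-runtime client for a
// workload cluster. client.New contacts the API server (e.g. /api) while
// setting up its REST mapper, which is where a "Client.Timeout exceeded
// while awaiting headers" error appears if the endpoint is unreachable.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func main() {
	// WORKLOAD_KUBECONFIG is an assumed variable for this sketch, pointing
	// at the workload cluster's kubeconfig.
	kubeconfig := os.Getenv("WORKLOAD_KUBECONFIG")

	restCfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		fmt.Fprintf(os.Stderr, "failed to load kubeconfig: %v\n", err)
		os.Exit(1)
	}

	// If https://<control-plane-fqdn>:6443 is unreachable, this call fails
	// with a timeout much like the failure reported in this run.
	c, err := client.New(restCfg, client.Options{})
	if err != nil {
		fmt.Fprintf(os.Stderr, "failed to get controller-runtime client: %v\n", err)
		os.Exit(1)
	}
	_ = c // ready for Get/List/Create calls against the workload cluster
}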
				
See stdout/stderr in junit.e2e_suite.1.xml



11 Passed Tests

10 Skipped Tests

Error lines from build-log.txt

... skipping 487 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-m82yzf-control-plane-zmx89, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-76sgg, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-g7q7b, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-7545m, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-b4xrx, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-5nrb4, container kube-proxy
STEP: Error starting logs stream for pod kube-system/calico-node-76sgg, container calico-node: container "calico-node" in pod "calico-node-76sgg" is waiting to start: PodInitializing
STEP: Fetching activity logs took 575.032613ms
STEP: Dumping all the Cluster API resources in the "quick-start-q37voa" namespace
STEP: Deleting cluster quick-start-q37voa/quick-start-m82yzf
STEP: Deleting cluster quick-start-m82yzf
INFO: Waiting for the Cluster quick-start-q37voa/quick-start-m82yzf to be deleted
STEP: Waiting for cluster quick-start-m82yzf to be deleted
... skipping 58 lines ...
STEP: Dumping logs from the "kcp-upgrade-a688i1" workload cluster
STEP: Dumping workload cluster kcp-upgrade-daxuee/kcp-upgrade-a688i1 logs
Sep 27 20:21:50.186: INFO: INFO: Collecting logs for node kcp-upgrade-a688i1-control-plane-zhd4m in cluster kcp-upgrade-a688i1 in namespace kcp-upgrade-daxuee

Sep 27 20:23:59.554: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-a688i1-control-plane-zhd4m

Failed to get logs for machine kcp-upgrade-a688i1-control-plane-tkj8r, cluster kcp-upgrade-daxuee/kcp-upgrade-a688i1: dialing public load balancer at kcp-upgrade-a688i1-faaeac3d.northeurope.cloudapp.azure.com: dial tcp 40.115.104.196:22: connect: connection timed out
Sep 27 20:24:00.766: INFO: INFO: Collecting logs for node kcp-upgrade-a688i1-md-0-twwj4 in cluster kcp-upgrade-a688i1 in namespace kcp-upgrade-daxuee

Sep 27 20:26:10.626: INFO: INFO: Collecting boot logs for AzureMachine kcp-upgrade-a688i1-md-0-twwj4

Failed to get logs for machine kcp-upgrade-a688i1-md-0-546788f59b-chbpw, cluster kcp-upgrade-daxuee/kcp-upgrade-a688i1: dialing public load balancer at kcp-upgrade-a688i1-faaeac3d.northeurope.cloudapp.azure.com: dial tcp 40.115.104.196:22: connect: connection timed out
STEP: Dumping workload cluster kcp-upgrade-daxuee/kcp-upgrade-a688i1 kube-system pod logs
STEP: Fetching kube-system pod logs took 918.259163ms
STEP: Dumping workload cluster kcp-upgrade-daxuee/kcp-upgrade-a688i1 Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-hd49l, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-a688i1-control-plane-zhd4m, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-upgrade-a688i1-control-plane-zhd4m, container kube-apiserver
... skipping 8 lines ...
STEP: Fetching activity logs took 1.034088878s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-daxuee" namespace
STEP: Deleting cluster kcp-upgrade-daxuee/kcp-upgrade-a688i1
STEP: Deleting cluster kcp-upgrade-a688i1
INFO: Waiting for the Cluster kcp-upgrade-daxuee/kcp-upgrade-a688i1 to be deleted
STEP: Waiting for cluster kcp-upgrade-a688i1 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-9fr7m, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-a688i1-control-plane-zhd4m, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kqjqd, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-a688i1-control-plane-zhd4m, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-twqsw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-qxzx8, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-a688i1-control-plane-zhd4m, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-a688i1-control-plane-zhd4m, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5vn5n, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-l7sts, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-hd49l, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-daxuee
STEP: Redacting sensitive information from logs


• [SLOW TEST:1595.331 seconds]
... skipping 57 lines ...
Sep 27 20:40:36.415: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-62qgpz-md-0-kscklj-gs8br

Sep 27 20:40:37.116: INFO: INFO: Collecting logs for node md-rollout-62qgpz-md-0-59nhr in cluster md-rollout-62qgpz in namespace md-rollout-z2x6mk

Sep 27 20:42:50.674: INFO: INFO: Collecting boot logs for AzureMachine md-rollout-62qgpz-md-0-59nhr

Failed to get logs for machine md-rollout-62qgpz-md-0-7b9fc7576c-lm85m, cluster md-rollout-z2x6mk/md-rollout-62qgpz: [dialing from control plane to target node at md-rollout-62qgpz-md-0-59nhr: ssh: rejected: connect failed (Connection timed out), failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/md-rollout-62qgpz-md-0-59nhr' under resource group 'capz-e2e-kl1lqx' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"]
STEP: Dumping workload cluster md-rollout-z2x6mk/md-rollout-62qgpz kube-system pod logs
STEP: Fetching kube-system pod logs took 974.835794ms
STEP: Dumping workload cluster md-rollout-z2x6mk/md-rollout-62qgpz Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-n7lzq, container coredns
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-jvzl9, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-88jzw, container calico-node
... skipping 8 lines ...
STEP: Fetching activity logs took 1.518870943s
STEP: Dumping all the Cluster API resources in the "md-rollout-z2x6mk" namespace
STEP: Deleting cluster md-rollout-z2x6mk/md-rollout-62qgpz
STEP: Deleting cluster md-rollout-62qgpz
INFO: Waiting for the Cluster md-rollout-z2x6mk/md-rollout-62qgpz to be deleted
STEP: Waiting for cluster md-rollout-62qgpz to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-rollout-62qgpz-control-plane-76kqb, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-n7lzq, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-9mlvf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-88jzw, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-rollout-62qgpz-control-plane-76kqb, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-jvzl9, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-md-rollout-62qgpz-control-plane-76kqb, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-md-rollout-62qgpz-control-plane-76kqb, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-25qnb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fr2cm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-2h2dq, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-rollout" test spec
INFO: Deleting namespace md-rollout-z2x6mk
STEP: Redacting sensitive information from logs


• [SLOW TEST:1082.549 seconds]
... skipping 92 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-y0pg7z-control-plane-5wq2s, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-h8lw6, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-upgrade-y0pg7z-control-plane-hp8bv, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-69d6t, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-8p7hm, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-y0pg7z-control-plane-hp8bv, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-bvhmpl: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.001261946s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-jq3u8h" namespace
STEP: Deleting cluster kcp-upgrade-jq3u8h/kcp-upgrade-y0pg7z
STEP: Deleting cluster kcp-upgrade-y0pg7z
INFO: Waiting for the Cluster kcp-upgrade-jq3u8h/kcp-upgrade-y0pg7z to be deleted
STEP: Waiting for cluster kcp-upgrade-y0pg7z to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-w7mr8, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-h8lw6, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-y0pg7z-control-plane-7dl4q, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8p7hm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-y0pg7z-control-plane-5wq2s, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7p9cc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vc8lr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-88xk2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-2s78p, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-y0pg7z-control-plane-5wq2s, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-78l4k, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-69d6t, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8jr9c, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-y0pg7z-control-plane-5wq2s, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-y0pg7z-control-plane-5wq2s, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-y0pg7z-control-plane-hp8bv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-kcp-upgrade-y0pg7z-control-plane-hp8bv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-y0pg7z-control-plane-hp8bv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fjfwp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-kcp-upgrade-y0pg7z-control-plane-7dl4q, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-y0pg7z-control-plane-hp8bv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-kcp-upgrade-y0pg7z-control-plane-7dl4q, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-kcp-upgrade-y0pg7z-control-plane-7dl4q, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
INFO: Deleting namespace kcp-upgrade-jq3u8h
STEP: Redacting sensitive information from logs


• [SLOW TEST:2724.141 seconds]
... skipping 91 lines ...
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-hsmkh, container coredns
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-gqzei6-control-plane-ztklz, container kube-scheduler
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-gqzei6-control-plane-v64mb, container etcd
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-upgrade-gqzei6-control-plane-2s47f, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-kdwb7, container coredns
STEP: Creating log watcher for controller kube-system/etcd-kcp-upgrade-gqzei6-control-plane-ztklz, container etcd
STEP: Got error while iterating over activity logs for resource group capz-e2e-45z3jw: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00055859s
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-eoto5h" namespace
STEP: Deleting cluster kcp-upgrade-eoto5h/kcp-upgrade-gqzei6
STEP: Deleting cluster kcp-upgrade-gqzei6
INFO: Waiting for the Cluster kcp-upgrade-eoto5h/kcp-upgrade-gqzei6 to be deleted
STEP: Waiting for cluster kcp-upgrade-gqzei6 to be deleted
... skipping 13 lines ...
Running the Cluster API E2E tests Running the self-hosted spec 
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:103

STEP: Creating namespace "self-hosted" for hosting the cluster
Sep 27 20:50:02.813: INFO: starting to create namespace for hosting the "self-hosted" test spec
2021/09/27 20:50:02 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-unnna2" using the "(default)" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-unnna2 --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
... skipping 75 lines ...
STEP: Fetching activity logs took 513.569601ms
STEP: Dumping all the Cluster API resources in the "self-hosted" namespace
STEP: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-unnna2
INFO: Waiting for the Cluster self-hosted/self-hosted-unnna2 to be deleted
STEP: Waiting for cluster self-hosted-unnna2 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-self-hosted-unnna2-control-plane-7ntt6, container kube-scheduler: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-bfcd78f99-8b6t7, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-54f94494bd-wm4qf, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-self-hosted-unnna2-control-plane-7ntt6, container kube-controller-manager: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-5f7c498f8d-bntsl, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-cjvps, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-self-hosted-unnna2-control-plane-7ntt6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-sqw9c, container kube-proxy: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-66b74b44bd-qkgrw, container manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-self-hosted-unnna2-control-plane-7ntt6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-97gqm, container kube-proxy: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-5f7c498f8d-bntsl, container kube-rbac-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-rp62f, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rhn4n, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-dtvds, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-r8wv4, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
STEP: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
STEP: Redacting sensitive information from logs

... skipping 76 lines ...
STEP: Fetching activity logs took 519.018992ms
STEP: Dumping all the Cluster API resources in the "mhc-remediation-3zg46y" namespace
STEP: Deleting cluster mhc-remediation-3zg46y/mhc-remediation-5i2a4c
STEP: Deleting cluster mhc-remediation-5i2a4c
INFO: Waiting for the Cluster mhc-remediation-3zg46y/mhc-remediation-5i2a4c to be deleted
STEP: Waiting for cluster mhc-remediation-5i2a4c to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-d27bp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-5i2a4c-control-plane-chr2h, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-5i2a4c-control-plane-chr2h, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-xll9d, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-cqfnv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4zlpj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-5i2a4c-control-plane-chr2h, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-rfm6v, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-5i2a4c-control-plane-chr2h, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-b4kl2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qbw2g, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-3zg46y
STEP: Redacting sensitive information from logs


• [SLOW TEST:1226.872 seconds]
... skipping 165 lines ...
STEP: Fetching activity logs took 2.010796682s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-dsr4x3" namespace
STEP: Deleting cluster mhc-remediation-dsr4x3/mhc-remediation-qa7c9u
STEP: Deleting cluster mhc-remediation-qa7c9u
INFO: Waiting for the Cluster mhc-remediation-dsr4x3/mhc-remediation-qa7c9u to be deleted
STEP: Waiting for cluster mhc-remediation-qa7c9u to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-qa7c9u-control-plane-wfpvq, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-8pqdr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kp472, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-qa7c9u-control-plane-wfpvq, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-qa7c9u-control-plane-wfpvq, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fnchq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-tmksd, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5h5xg, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-mhc-remediation-qa7c9u-control-plane-q8gtz, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-qa7c9u-control-plane-q8gtz, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-gfb4x, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-mhc-remediation-qa7c9u-control-plane-q8gtz, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-mhc-remediation-qa7c9u-control-plane-q8gtz, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-mhc-remediation-qa7c9u-control-plane-wfpvq, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-brh77, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
INFO: Deleting namespace mhc-remediation-dsr4x3
STEP: Redacting sensitive information from logs


• Failure [1558.439 seconds]
Running the Cluster API E2E tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:40
  Should successfully remediate unhealthy machines with MachineHealthCheck
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:169
    Should successfully trigger KCP remediation [It]
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.3/e2e/mhc_remediations.go:115

    Failed to get controller-runtime client
    Unexpected error:
        <*url.Error | 0xc00157be30>: {
            Op: "Get",
            URL: "https://mhc-remediation-qa7c9u-7959f1f0.northeurope.cloudapp.azure.com:6443/api?timeout=32s",
            Err: <*http.httpError | 0xc0050bdb00>{
                err: "net/http: request canceled (Client.Timeout exceeded while awaiting headers)",
                timeout: true,
            },
... skipping 121 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-c5xqdu-control-plane-lt95d, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-node-t4m6v, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-2lpw5, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-c5xqdu-control-plane-lt95d, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-c5xqdu-control-plane-lt95d, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-w44vx, container kube-proxy
STEP: Error starting logs stream for pod kube-system/calico-node-dst4z, container calico-node: pods "machine-pool-c5xqdu-mp-0000001" not found
STEP: Error starting logs stream for pod kube-system/calico-node-4dztx, container calico-node: pods "machine-pool-c5xqdu-mp-0000000" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-w44vx, container kube-proxy: pods "machine-pool-c5xqdu-mp-0000001" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-r59p9, container kube-proxy: pods "machine-pool-c5xqdu-mp-0000000" not found
STEP: Fetching activity logs took 496.670466ms
STEP: Dumping all the Cluster API resources in the "machine-pool-8n01vk" namespace
STEP: Deleting cluster machine-pool-8n01vk/machine-pool-c5xqdu
STEP: Deleting cluster machine-pool-c5xqdu
INFO: Waiting for the Cluster machine-pool-8n01vk/machine-pool-c5xqdu to be deleted
STEP: Waiting for cluster machine-pool-c5xqdu to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-machine-pool-c5xqdu-control-plane-lt95d, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-2lpw5, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-558bd4d5db-n85z5, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hn5fg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-t4m6v, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-machine-pool-c5xqdu-control-plane-lt95d, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-kzxbb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-machine-pool-c5xqdu-control-plane-lt95d, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-p9vdk, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-machine-pool-c5xqdu-control-plane-lt95d, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2ppmf, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "machine-pool" test spec
INFO: Deleting namespace machine-pool-8n01vk
STEP: Redacting sensitive information from logs


• [SLOW TEST:1219.116 seconds]
... skipping 75 lines ...
STEP: Fetching activity logs took 554.321867ms
STEP: Dumping all the Cluster API resources in the "md-scale-27mb3g" namespace
STEP: Deleting cluster md-scale-27mb3g/md-scale-5ghmf5
STEP: Deleting cluster md-scale-5ghmf5
INFO: Waiting for the Cluster md-scale-27mb3g/md-scale-5ghmf5 to be deleted
STEP: Waiting for cluster md-scale-5ghmf5 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-p5ldl, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-md-scale-5ghmf5-control-plane-2lzsz, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-sp9cd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-kl5pc, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-jlpcm, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-md-scale-5ghmf5-control-plane-2lzsz, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "md-scale" test spec
INFO: Deleting namespace md-scale-27mb3g
STEP: Redacting sensitive information from logs


• [SLOW TEST:1228.321 seconds]
... skipping 68 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-q2e1t3-control-plane-m5lfx, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-d5m84, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-7tcnv, container coredns
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-fbqbr, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-node-drain-q2e1t3-control-plane-m5lfx, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-node-drain-q2e1t3-control-plane-m5lfx, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-h76lrm: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.00043056s
STEP: Dumping all the Cluster API resources in the "node-drain-4qzuq2" namespace
STEP: Deleting cluster node-drain-4qzuq2/node-drain-q2e1t3
STEP: Deleting cluster node-drain-q2e1t3
INFO: Waiting for the Cluster node-drain-4qzuq2/node-drain-q2e1t3 to be deleted
STEP: Waiting for cluster node-drain-q2e1t3 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-d5m84, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-fbqbr, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-node-drain-q2e1t3-control-plane-m5lfx, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-w84m4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-node-drain-q2e1t3-control-plane-m5lfx, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-node-drain-q2e1t3-control-plane-m5lfx, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-node-drain-q2e1t3-control-plane-m5lfx, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7tcnv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-b42m5, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "node-drain" test spec
INFO: Deleting namespace node-drain-4qzuq2
STEP: Redacting sensitive information from logs


• [SLOW TEST:2070.632 seconds]
... skipping 7 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck [It] Should successfully trigger KCP remediation 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.3/framework/cluster_proxy.go:171

Ran 12 of 22 Specs in 7079.593 seconds
FAIL! -- 11 Passed | 1 Failed | 0 Pending | 10 Skipped


Ginkgo ran 1 suite in 1h59m3.847590149s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...