PR | CecileRobertMichon: re-enable ILB test for VMSS
Result | FAILURE
Tests | 1 failed / 1 succeeded
Started |
Elapsed | 43m22s
Revision | b99972892a98d054bb001c9e0a6f53d2ff9ad106
Refs | 1177
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sWindows\senabled\sVMSS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\san\sLinux\sAzureMachinePool\swith\s1\snodes\sand\sWindows\sAzureMachinePool\swith\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:467 Timed out after 300.114s. Deployment default/web-windows failed Deployment: { "metadata": { "name": "web-windows", "namespace": "default", "selfLink": "/apis/apps/v1/namespaces/default/deployments/web-windows", "uid": "47a6b018-58aa-4368-8ac7-6448a9c38051", "resourceVersion": "3228", "generation": 1, "creationTimestamp": "2021-03-04T21:52:06Z", "annotations": { "deployment.kubernetes.io/revision": "1" }, "managedFields": [ { "manager": "cluster-api-e2e", "operation": "Update", "apiVersion": "apps/v1", "time": "2021-03-04T21:52:06Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { "f:progressDeadlineSeconds": {}, "f:replicas": {}, "f:revisionHistoryLimit": {}, "f:selector": { "f:matchLabels": { ".": {}, "f:app": {} } }, "f:strategy": { "f:rollingUpdate": { ".": {}, "f:maxSurge": {}, "f:maxUnavailable": {} }, "f:type": {} }, "f:template": { "f:metadata": { "f:labels": { ".": {}, "f:app": {} } }, "f:spec": { "f:containers": { "k:{\"name\":\"web-windows\"}": { ".": {}, "f:image": {}, "f:imagePullPolicy": {}, "f:name": {}, "f:resources": { ".": {}, "f:requests": { ".": {}, "f:cpu": {}, "f:memory": {} } }, "f:terminationMessagePath": {}, "f:terminationMessagePolicy": {} } }, "f:dnsPolicy": {}, "f:nodeSelector": { ".": {}, "f:kubernetes.io/os": {} }, "f:restartPolicy": {}, "f:schedulerName": {}, "f:securityContext": {}, "f:terminationGracePeriodSeconds": {} } } } } }, { "manager": "kube-controller-manager", "operation": "Update", "apiVersion": "apps/v1", "time": "2021-03-04T21:52:07Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:annotations": { ".": {}, "f:deployment.kubernetes.io/revision": {} } }, "f:status": { "f:conditions": { ".": {}, "k:{\"type\":\"Available\"}": { ".": {}, "f:lastTransitionTime": {}, "f:lastUpdateTime": {}, "f:message": {}, "f:reason": {}, "f:status": {}, "f:type": {} }, "k:{\"type\":\"Progressing\"}": { ".": {}, "f:lastTransitionTime": {}, "f:lastUpdateTime": {}, "f:message": {}, "f:reason": {}, "f:status": {}, "f:type": {} } }, "f:observedGeneration": {}, "f:replicas": {}, "f:unavailableReplicas": {}, "f:updatedReplicas": {} } } } ] }, "spec": { "replicas": 1, "selector": { "matchLabels": { "app": "web-windows" } }, "template": { "metadata": { "creationTimestamp": null, "labels": { "app": "web-windows" } }, "spec": { "containers": [ { "name": "web-windows", "image": "k8sprow.azurecr.io/kubernetes-e2e-test-images/httpd:2.4.39-alpine", "resources": { "requests": { "cpu": "10m", "memory": "10M" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "imagePullPolicy": "IfNotPresent" } ], "restartPolicy": "Always", "terminationGracePeriodSeconds": 30, "dnsPolicy": "ClusterFirst", "nodeSelector": { "kubernetes.io/os": "windows" }, "securityContext": {}, "schedulerName": "default-scheduler" } }, "strategy": { "type": "RollingUpdate", "rollingUpdate": { "maxUnavailable": "25%", "maxSurge": "25%" } }, "revisionHistoryLimit": 10, "progressDeadlineSeconds": 600 }, "status": { "observedGeneration": 1, "replicas": 1, "updatedReplicas": 1, "unavailableReplicas": 1, "conditions": [ { "type": "Available", "status": "False", "lastUpdateTime": "2021-03-04T21:52:07Z", "lastTransitionTime": "2021-03-04T21:52:07Z", "reason": "MinimumReplicasUnavailable", "message": "Deployment does not have minimum availability." 
}, { "type": "Progressing", "status": "True", "lastUpdateTime": "2021-03-04T21:52:07Z", "lastTransitionTime": "2021-03-04T21:52:07Z", "reason": "ReplicaSetUpdated", "message": "ReplicaSet \"web-windows-58699f5dd4\" is progressing." } ] } } LAST SEEN TYPE REASON OBJECT MESSAGE 2021-03-04 21:52:07 +0000 UTC Normal ScalingReplicaSet deployment/web-windows Scaled up replica set web-windows-58699f5dd4 to 1 Expected <bool>: false to be true /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:93from junit.e2e_suite.3.xml
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Thu, 04 Mar 2021 21:30:39 UTC on Ginkgo node 3 of 3 �[1mSTEP�[0m: Creating a namespace for hosting the "create-workload-cluster" test spec INFO: Creating namespace create-workload-cluster-x558zo INFO: Creating event watcher for namespace "create-workload-cluster-x558zo" INFO: Creating the workload cluster with name "capz-e2e-52ddh5" using the "machine-pool-windows" template (Kubernetes v1.19.7, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-52ddh5 --infrastructure (default) --kubernetes-version v1.19.7 --control-plane-machine-count 1 --worker-machine-count 1 --flavor machine-pool-windows INFO: Applying the cluster template yaml to the cluster cluster.cluster.x-k8s.io/capz-e2e-52ddh5 created azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-52ddh5 created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-52ddh5-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-52ddh5-control-plane created machinepool.exp.cluster.x-k8s.io/capz-e2e-52ddh5-mp-0 created azuremachinepool.exp.infrastructure.cluster.x-k8s.io/capz-e2e-52ddh5-mp-0 created kubeadmconfig.bootstrap.cluster.x-k8s.io/capz-e2e-52ddh5-mp-0 created machinepool.exp.cluster.x-k8s.io/capz-e2e-52ddh5-mp-win created azuremachinepool.exp.infrastructure.cluster.x-k8s.io/capz-e2e-52ddh5-mp-win created kubeadmconfig.bootstrap.cluster.x-k8s.io/capz-e2e-52ddh5-mp-win created configmap/cni-capz-e2e-52ddh5-crs-0 created clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-52ddh5-crs-0 created INFO: Waiting for the cluster infrastructure to be provisioned �[1mSTEP�[0m: Waiting for cluster to enter the provisioned phase INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by create-workload-cluster-x558zo/capz-e2e-52ddh5-control-plane to be provisioned �[1mSTEP�[0m: Waiting for one control plane node to exist INFO: Waiting for control plane to be ready INFO: Waiting for control plane create-workload-cluster-x558zo/capz-e2e-52ddh5-control-plane to be ready (implies underlying nodes to be ready as well) �[1mSTEP�[0m: Waiting for the control plane to be ready INFO: Waiting for the machine deployments to be provisioned INFO: Waiting for the machine pools to be provisioned �[1mSTEP�[0m: Waiting for the machine pool workload nodes to exist �[1mSTEP�[0m: Waiting for the machine pool workload nodes to exist �[1mSTEP�[0m: creating a Kubernetes client to the workload cluster �[1mSTEP�[0m: creating an HTTP deployment �[1mSTEP�[0m: waiting for deployment default/web to be available �[1mSTEP�[0m: creating an internal Load Balancer service �[1mSTEP�[0m: waiting for service default/web-ilb to be available �[1mSTEP�[0m: connecting to the internal LB service from a curl pod �[1mSTEP�[0m: waiting for job default/curl-to-ilb-job1ugpl to be complete �[1mSTEP�[0m: deleting the ilb test resources �[1mSTEP�[0m: creating an external Load Balancer service �[1mSTEP�[0m: waiting for service default/web-elb to be available �[1mSTEP�[0m: connecting to the external LB service from a curl pod �[1mSTEP�[0m: waiting for job default/curl-to-elb-jobroiz4 to be complete �[1mSTEP�[0m: connecting directly to the external LB service 2021/03/04 21:52:06 [DEBUG] GET http://20.50.22.53 �[1mSTEP�[0m: deleting the test resources �[1mSTEP�[0m: creating a Kubernetes client to the workload 
cluster �[1mSTEP�[0m: creating an HTTP deployment �[1mSTEP�[0m: waiting for deployment default/web-windows to be available �[1mSTEP�[0m: Dumping logs from the "capz-e2e-52ddh5" workload cluster �[1mSTEP�[0m: Dumping workload cluster create-workload-cluster-x558zo/capz-e2e-52ddh5 logs �[1mSTEP�[0m: Dumping workload cluster create-workload-cluster-x558zo/capz-e2e-52ddh5 kube-system pod logs �[1mSTEP�[0m: Fetching kube-system pod logs took 790.376964ms �[1mSTEP�[0m: Dumping workload cluster create-workload-cluster-x558zo/capz-e2e-52ddh5 Azure activity log �[1mSTEP�[0m: Creating log watcher for controller kube-system/etcd-capz-e2e-52ddh5-control-plane-dl5st, container etcd �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-f9fd979d6-zh6lg, container coredns �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-f9fd979d6-zv7kp, container coredns �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-52ddh5-control-plane-dl5st, container kube-apiserver �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-52ddh5-control-plane-dl5st, container kube-controller-manager �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-ggzps, container kube-flannel �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-dkgn7, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-windows-27d9x, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-xtfjn, container kube-flannel �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-xvl4v, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-52ddh5-control-plane-dl5st, container kube-scheduler �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-dmtng, container kube-flannel �[1mSTEP�[0m: Fetching activity logs took 1.275588737s �[1mSTEP�[0m: Dumping all the Cluster API resources in the "create-workload-cluster-x558zo" namespace �[1mSTEP�[0m: Deleting all clusters in the create-workload-cluster-x558zo namespace �[1mSTEP�[0m: Deleting cluster capz-e2e-52ddh5 INFO: Waiting for the Cluster create-workload-cluster-x558zo/capz-e2e-52ddh5 to be deleted �[1mSTEP�[0m: Waiting for cluster capz-e2e-52ddh5 to be deleted �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-52ddh5-control-plane-dl5st, container kube-controller-manager: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-52ddh5-control-plane-dl5st, container kube-apiserver: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-52ddh5-control-plane-dl5st, container etcd: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-52ddh5-control-plane-dl5st, container kube-scheduler: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-proxy-xvl4v, container kube-proxy: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-xtfjn, container kube-flannel: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-zh6lg, container coredns: http2: client connection lost �[1mSTEP�[0m: 
Got error while streaming logs for pod kube-system/kube-proxy-dkgn7, container kube-proxy: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-dmtng, container kube-flannel: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-ggzps, container kube-flannel: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-zv7kp, container coredns: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-proxy-windows-27d9x, container kube-proxy: http2: client connection lost �[1mSTEP�[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace create-workload-cluster-x558zo �[1mSTEP�[0m: Redacting sensitive information from logs INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 35m48s on Ginkgo node 3 of 3
capz-e2e Workload cluster creation Creating a Windows Enabled cluster With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd in a HA cluster
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd in a single control plane cluster
capz-e2e Running the Cluster API E2E tests Running the MachineDeployment upgrade spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should adopt up-to-date control plane Machines without modification Should adopt up-to-date control plane Machines without modification
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Workload cluster creation Creating a GPU-enabled cluster with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a VMSS cluster with a single control plane node and an AzureMachinePool with 2 nodes
capz-e2e Workload cluster creation Creating a cluster using a different SP identity with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster With ipv6 worker node
capz-e2e Workload cluster creation Creating a private cluster Creates a public management cluster in the same vnet
capz-e2e Workload cluster creation With 3 control-plane nodes and 2 worker nodes
... skipping 429 lines ...
STEP: creating an external Load Balancer service
STEP: waiting for service default/web-elb to be available
STEP: connecting to the external LB service from a curl pod
STEP: waiting for job default/curl-to-elb-jobpzfam to be complete
STEP: connecting directly to the external LB service
2021/03/04 21:47:36 [DEBUG] GET http://51.138.37.122
2021/03/04 21:48:06 [ERR] GET http://51.138.37.122 request failed: Get "http://51.138.37.122": dial tcp 51.138.37.122:80: i/o timeout
2021/03/04 21:48:06 [DEBUG] GET http://51.138.37.122: retrying in 1s (4 left)
STEP: deleting the test resources
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windows to be available
STEP: creating an internal Load Balancer service
... skipping 7 lines ...
STEP: waiting for job default/curl-to-elb-jobyq08z to be complete
STEP: connecting directly to the external LB service
2021/03/04 21:51:47 [DEBUG] GET http://51.124.52.117
STEP: deleting the test resources
STEP: Dumping logs from the "capz-e2e-ibtuuf" workload cluster
STEP: Dumping workload cluster create-workload-cluster-iwglsi/capz-e2e-ibtuuf logs
Failed to get logs for machine capz-e2e-ibtuuf-md-win-59fb5fbff5-jh8jd, cluster create-workload-cluster-iwglsi/capz-e2e-ibtuuf: dialing from control plane to target node at capz-e2e-ibtuuf-md-win-fpjzf: ssh: rejected: connect failed (Temporary failure in name resolution)
STEP: Dumping workload cluster create-workload-cluster-iwglsi/capz-e2e-ibtuuf kube-system pod logs
STEP: Fetching kube-system pod logs took 836.603092ms
STEP: Dumping workload cluster create-workload-cluster-iwglsi/capz-e2e-ibtuuf Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-f9fd979d6-np52c, container coredns
STEP: Creating log watcher for controller kube-system/coredns-f9fd979d6-tp9p7, container coredns
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-hjr6x, container kube-flannel
... skipping 15 lines ...
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-ibtuuf-control-plane-5qgwf, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-d759t, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-pld2q, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-proxy-sn6w7, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-zwzjq, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-dbn2t, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-ibtuuf: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000732392s
STEP: Dumping all the Cluster API resources in the "create-workload-cluster-iwglsi" namespace
STEP: Deleting all clusters in the create-workload-cluster-iwglsi namespace
STEP: Deleting cluster capz-e2e-ibtuuf
INFO: Waiting for the Cluster create-workload-cluster-iwglsi/capz-e2e-ibtuuf to be deleted
STEP: Waiting for cluster capz-e2e-ibtuuf to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-ztnrj, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ibtuuf-control-plane-ng2jq, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-np52c, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ibtuuf-control-plane-5qgwf, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ibtuuf-control-plane-ng2jq, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ibtuuf-control-plane-ng2jq, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-sn6w7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-zwzjq, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ibtuuf-control-plane-pt4br, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ibtuuf-control-plane-pt4br, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ibtuuf-control-plane-5qgwf, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ibtuuf-control-plane-5qgwf, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mw922, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-dbn2t, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-d759t, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-hjr6x, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ibtuuf-control-plane-ng2jq, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ibtuuf-control-plane-5qgwf, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ibtuuf-control-plane-pt4br, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7wcm4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-tp9p7, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ibtuuf-control-plane-pt4br, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-pld2q, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-8c778, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace create-workload-cluster-iwglsi
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 33m45s on Ginkgo node 1 of 3
... skipping 80 lines ...
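The "[DEBUG] GET ..." and "retrying in 1s (4 left)" lines earlier in this excerpt have the shape of a retrying HTTP client; the snippet below is a hedged illustration of that pattern using hashicorp/go-retryablehttp. The log does not show which client the e2e helpers actually use, so treat the library choice, the retry count, and the wait bounds as assumptions for illustration only.

// Illustrative sketch (assumed library and settings): retry an HTTP GET against the
// external LB so that transient dial timeouts do not immediately fail the step.
package main

import (
	"fmt"
	"time"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

func main() {
	client := retryablehttp.NewClient()
	client.RetryMax = 5                   // total retries before giving up (assumed)
	client.RetryWaitMin = 1 * time.Second // matches the "retrying in 1s" backoff seen above
	client.RetryWaitMax = 30 * time.Second

	// Address taken from the log above; a dial timeout here is retried with backoff.
	resp, err := client.Get("http://51.138.37.122")
	if err != nil {
		fmt.Println("external LB never became reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("external LB responded with status:", resp.StatusCode)
}

Under a pattern like this, the single i/o timeout against 51.138.37.122 seen above is absorbed by a retry rather than failing the spec outright.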
STEP: Fetching activity logs took 1.275588737s
STEP: Dumping all the Cluster API resources in the "create-workload-cluster-x558zo" namespace
STEP: Deleting all clusters in the create-workload-cluster-x558zo namespace
STEP: Deleting cluster capz-e2e-52ddh5
INFO: Waiting for the Cluster create-workload-cluster-x558zo/capz-e2e-52ddh5 to be deleted
STEP: Waiting for cluster capz-e2e-52ddh5 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-52ddh5-control-plane-dl5st, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-52ddh5-control-plane-dl5st, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-52ddh5-control-plane-dl5st, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-52ddh5-control-plane-dl5st, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xvl4v, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-xtfjn, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-zh6lg, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dkgn7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-dmtng, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-ggzps, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-zv7kp, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-27d9x, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace create-workload-cluster-x558zo
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 35m48s on Ginkgo node 3 of 3
... skipping 3 lines ...
Creating a Windows enabled VMSS cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:466
with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node [It]
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:467
Timed out after 300.114s.
Deployment default/web-windows failed
Deployment:
{ "metadata": { "name": "web-windows", "namespace": "default", "selfLink": "/apis/apps/v1/namespaces/default/deployments/web-windows",
... skipping 243 lines ...
STEP: Tearing down the management cluster

Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a Windows enabled VMSS cluster [It] with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:93

Ran 2 of 18 Specs in 2311.413 seconds
FAIL! -- 1 Passed | 1 Failed | 0 Pending | 16 Skipped

Ginkgo ran 1 suite in 39m37.209432194s
Test Suite Failed
make[1]: *** [Makefile:169: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:177: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...