PR: nprokopic: Fix AKS cluster provisioning errors
Result: FAILURE
Tests: 1 failed / 4 succeeded
Started: 2021-03-02 17:54
Elapsed: 59m10s
Revision: 96812e741f85244df749c340fbf728744a863a43
Refs: 1205

Test Failures


capz-e2e Workload cluster creation With 3 control-plane nodes and 2 worker nodes (37m0s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sWith\s3\scontrol\-plane\snodes\sand\s2\sworker\snodes$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:143
Timed out after 300.051s.
Deployment default/web failed
Deployment:
{
  "metadata": {
    "name": "web",
    "namespace": "default",
    "selfLink": "/apis/apps/v1/namespaces/default/deployments/web",
    "uid": "130d0707-5457-4ce0-8bc6-a007c01a9f2c",
    "resourceVersion": "2772",
    "generation": 1,
    "creationTimestamp": "2021-03-02T18:24:03Z",
    "annotations": {
      "deployment.kubernetes.io/revision": "1"
    },
    "managedFields": [
      {
        "manager": "cluster-api-e2e",
        "operation": "Update",
        "apiVersion": "apps/v1",
        "time": "2021-03-02T18:24:03Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:spec": {
            "f:progressDeadlineSeconds": {},
            "f:replicas": {},
            "f:revisionHistoryLimit": {},
            "f:selector": {
              "f:matchLabels": {
                ".": {},
                "f:app": {}
              }
            },
            "f:strategy": {
              "f:rollingUpdate": {
                ".": {},
                "f:maxSurge": {},
                "f:maxUnavailable": {}
              },
              "f:type": {}
            },
            "f:template": {
              "f:metadata": {
                "f:labels": {
                  ".": {},
                  "f:app": {}
                }
              },
              "f:spec": {
                "f:containers": {
                  "k:{\"name\":\"web\"}": {
                    ".": {},
                    "f:image": {},
                    "f:imagePullPolicy": {},
                    "f:name": {},
                    "f:resources": {
                      ".": {},
                      "f:requests": {
                        ".": {},
                        "f:cpu": {},
                        "f:memory": {}
                      }
                    },
                    "f:terminationMessagePath": {},
                    "f:terminationMessagePolicy": {}
                  }
                },
                "f:dnsPolicy": {},
                "f:nodeSelector": {
                  ".": {},
                  "f:kubernetes.io/os": {}
                },
                "f:restartPolicy": {},
                "f:schedulerName": {},
                "f:securityContext": {},
                "f:terminationGracePeriodSeconds": {}
              }
            }
          }
        }
      },
      {
        "manager": "kube-controller-manager",
        "operation": "Update",
        "apiVersion": "apps/v1",
        "time": "2021-03-02T18:24:03Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:metadata": {
            "f:annotations": {
              ".": {},
              "f:deployment.kubernetes.io/revision": {}
            }
          },
          "f:status": {
            "f:conditions": {
              ".": {},
              "k:{\"type\":\"Available\"}": {
                ".": {},
                "f:lastTransitionTime": {},
                "f:lastUpdateTime": {},
                "f:message": {},
                "f:reason": {},
                "f:status": {},
                "f:type": {}
              },
              "k:{\"type\":\"Progressing\"}": {
                ".": {},
                "f:lastTransitionTime": {},
                "f:lastUpdateTime": {},
                "f:message": {},
                "f:reason": {},
                "f:status": {},
                "f:type": {}
              }
            },
            "f:observedGeneration": {},
            "f:replicas": {},
            "f:unavailableReplicas": {},
            "f:updatedReplicas": {}
          }
        }
      }
    ]
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "matchLabels": {
        "app": "web"
      }
    },
    "template": {
      "metadata": {
        "creationTimestamp": null,
        "labels": {
          "app": "web"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "web",
            "image": "httpd",
            "resources": {
              "requests": {
                "cpu": "10m",
                "memory": "10M"
              }
            },
            "terminationMessagePath": "/dev/termination-log",
            "terminationMessagePolicy": "File",
            "imagePullPolicy": "Always"
          }
        ],
        "restartPolicy": "Always",
        "terminationGracePeriodSeconds": 30,
        "dnsPolicy": "ClusterFirst",
        "nodeSelector": {
          "kubernetes.io/os": "linux"
        },
        "securityContext": {},
        "schedulerName": "default-scheduler"
      }
    },
    "strategy": {
      "type": "RollingUpdate",
      "rollingUpdate": {
        "maxUnavailable": "25%",
        "maxSurge": "25%"
      }
    },
    "revisionHistoryLimit": 10,
    "progressDeadlineSeconds": 600
  },
  "status": {
    "observedGeneration": 1,
    "replicas": 1,
    "updatedReplicas": 1,
    "unavailableReplicas": 1,
    "conditions": [
      {
        "type": "Available",
        "status": "False",
        "lastUpdateTime": "2021-03-02T18:24:03Z",
        "lastTransitionTime": "2021-03-02T18:24:03Z",
        "reason": "MinimumReplicasUnavailable",
        "message": "Deployment does not have minimum availability."
      },
      {
        "type": "Progressing",
        "status": "True",
        "lastUpdateTime": "2021-03-02T18:24:03Z",
        "lastTransitionTime": "2021-03-02T18:24:03Z",
        "reason": "ReplicaSetUpdated",
        "message": "ReplicaSet \"web-5d45b7f96d\" is progressing."
      }
    ]
  }
}
LAST SEEN                      TYPE    REASON             OBJECT          MESSAGE
2021-03-02 18:24:03 +0000 UTC  Normal  ScalingReplicaSet  deployment/web  Scaled up replica set web-5d45b7f96d to 1

Expected
    <bool>: false
to be true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:93
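For context, the assertion at helpers.go:93 is a timed wait for the Deployment to report availability. The sketch below is an illustration only, not the actual test/e2e/helpers.go code: it shows an equivalent client-go poll (against the v0.20-era API seen in this log) that fails the same way when default/web never reaches an Available=True condition within the five-minute window. The kubeconfig loading is a hypothetical stand-in for however the suite builds its workload-cluster client.

package main

import (
	"context"
	"fmt"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// deploymentIsAvailable checks the condition this kind of wait settles on:
// the Deployment must report Available=True, which the deployment controller
// only sets once minimum replica availability is met.
func deploymentIsAvailable(d *appsv1.Deployment) bool {
	for _, c := range d.Status.Conditions {
		if c.Type == appsv1.DeploymentAvailable && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Hypothetical setup: load a kubeconfig from the default location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 10s for 5 minutes, matching the ~300s timeout in the failure.
	err = wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
		d, getErr := client.AppsV1().Deployments("default").Get(context.TODO(), "web", metav1.GetOptions{})
		if getErr != nil {
			return false, nil // treat transient errors as "not yet available"
		}
		return deploymentIsAvailable(d), nil
	})
	if err != nil {
		// This is the state captured above: replicas=1, unavailableReplicas=1,
		// Available=False with reason MinimumReplicasUnavailable.
		fmt.Println("Deployment default/web failed:", err)
		return
	}
	fmt.Println("Deployment default/web is available")
}

With the status dumped above (Available=False, reason MinimumReplicasUnavailable), a poll like this times out after 300s, consistent with the "Expected <bool>: false to be true" assertion output.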




Error lines from build-log.txt

... skipping 454 lines ...
STEP: Fetching activity logs took 1.187343983s
STEP: Dumping all the Cluster API resources in the "create-workload-cluster-n3z099" namespace
STEP: Deleting all clusters in the create-workload-cluster-n3z099 namespace
STEP: Deleting cluster capz-e2e-d0jn00
INFO: Waiting for the Cluster create-workload-cluster-n3z099/capz-e2e-d0jn00 to be deleted
STEP: Waiting for cluster capz-e2e-d0jn00 to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-d0jn00-control-plane-bpgtd, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-d0jn00-control-plane-t4j5k, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8g89m, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-8xsxc, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5r2mn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-d0jn00-control-plane-ppksh, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-d0jn00-control-plane-t4j5k, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xcxl9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-x784v, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-42rw5, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ccnq8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-d0jn00-control-plane-ppksh, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-d0jn00-control-plane-bpgtd, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-d0jn00-control-plane-t4j5k, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-d0jn00-control-plane-t4j5k, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-8f59968d4-xr8q8, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-wrqcq, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-d0jn00-control-plane-bpgtd, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-78bhs, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-br7zg, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-d0jn00-control-plane-ppksh, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-d0jn00-control-plane-ppksh, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-d0jn00-control-plane-bpgtd, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace create-workload-cluster-n3z099
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 25m28s on Ginkgo node 2 of 3


... skipping 118 lines ...
STEP: Fetching activity logs took 1.40621958s
STEP: Dumping all the Cluster API resources in the "create-workload-cluster-exzdjn" namespace
STEP: Deleting all clusters in the create-workload-cluster-exzdjn namespace
STEP: Deleting cluster capz-e2e-ktasnz
INFO: Waiting for the Cluster create-workload-cluster-exzdjn/capz-e2e-ktasnz to be deleted
STEP: Waiting for cluster capz-e2e-ktasnz to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-lk2sn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ktasnz-control-plane-x59w9, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ktasnz-control-plane-x59w9, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-jfmgw, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-7khgc, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ktasnz-control-plane-x59w9, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-gnwcx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-8f59968d4-zcmrl, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ktasnz-control-plane-x59w9, container kube-scheduler: http2: client connection lost
W0302 18:30:59.220682   19233 reflector.go:436] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I0302 18:31:30.496173   19233 trace.go:205] Trace[326081223]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167 (02-Mar-2021 18:31:00.495) (total time: 30001ms):
Trace[326081223]: [30.001047596s] [30.001047596s] END
E0302 18:31:30.496266   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp 52.167.65.62:6443: i/o timeout
I0302 18:32:02.219861   19233 trace.go:205] Trace[1087694162]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167 (02-Mar-2021 18:31:32.218) (total time: 30001ms):
Trace[1087694162]: [30.001285681s] [30.001285681s] END
E0302 18:32:02.219989   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp 52.167.65.62:6443: i/o timeout
I0302 18:32:35.960673   19233 trace.go:205] Trace[859397807]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167 (02-Mar-2021 18:32:05.959) (total time: 30000ms):
Trace[859397807]: [30.000691354s] [30.000691354s] END
E0302 18:32:35.960743   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp 52.167.65.62:6443: i/o timeout
I0302 18:33:18.123542   19233 trace.go:205] Trace[1793371757]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167 (02-Mar-2021 18:32:48.122) (total time: 30000ms):
Trace[1793371757]: [30.000808087s] [30.000808087s] END
E0302 18:33:18.123625   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp 52.167.65.62:6443: i/o timeout
I0302 18:34:08.980529   19233 trace.go:205] Trace[49627420]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167 (02-Mar-2021 18:33:38.979) (total time: 30001ms):
Trace[49627420]: [30.001059691s] [30.001059691s] END
E0302 18:34:08.980656   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp 52.167.65.62:6443: i/o timeout
I0302 18:35:07.765813   19233 trace.go:205] Trace[389963374]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167 (02-Mar-2021 18:34:37.764) (total time: 30000ms):
Trace[389963374]: [30.000768262s] [30.000768262s] END
E0302 18:35:07.765891   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp 52.167.65.62:6443: i/o timeout
I0302 18:36:36.050120   19233 trace.go:205] Trace[552349023]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167 (02-Mar-2021 18:36:06.048) (total time: 30001ms):
Trace[552349023]: [30.001251291s] [30.001251291s] END
E0302 18:36:36.050208   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp 52.167.65.62:6443: i/o timeout
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace create-workload-cluster-exzdjn
STEP: Redacting sensitive information from logs
E0302 18:37:13.577986   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp: lookup capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 36m30s on Ginkgo node 1 of 3


• [SLOW TEST:2190.205 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:39
... skipping 81 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-t2tcmj-control-plane-fn6dr, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-csg6c, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-t2tcmj-control-plane-fn6dr, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-t2tcmj-control-plane-ng2n2, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-rxjwt, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-dqcgl, container kube-proxy
STEP: Error starting logs stream for pod kube-system/calico-node-qd6hp, container calico-node: the server could not find the requested resource ( pods/log calico-node-qd6hp)
STEP: Error starting logs stream for pod kube-system/calico-node-lmsxh, container calico-node: the server could not find the requested resource ( pods/log calico-node-lmsxh)
STEP: Error starting logs stream for pod kube-system/kube-proxy-jbjgt, container kube-proxy: the server could not find the requested resource ( pods/log kube-proxy-jbjgt)
STEP: Error starting logs stream for pod kube-system/kube-proxy-hzq69, container kube-proxy: the server could not find the requested resource ( pods/log kube-proxy-hzq69)
STEP: Got error while iterating over activity logs for resource group capz-e2e-t2tcmj: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000556183s
STEP: Dumping all the Cluster API resources in the "create-workload-cluster-lhtgwl" namespace
STEP: Deleting all clusters in the create-workload-cluster-lhtgwl namespace
STEP: Deleting cluster capz-e2e-t2tcmj
INFO: Waiting for the Cluster create-workload-cluster-lhtgwl/capz-e2e-t2tcmj to be deleted
STEP: Waiting for cluster capz-e2e-t2tcmj to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-t2tcmj-control-plane-ng2n2, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-t2tcmj-control-plane-74qfv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ldng2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-t2tcmj-control-plane-74qfv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-mjkgm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-csg6c, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-t2tcmj-control-plane-74qfv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-t2tcmj-control-plane-fn6dr, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-dqcgl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-t2tcmj-control-plane-fn6dr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-8f59968d4-855s6, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-t2tcmj-control-plane-ng2n2, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-x2fm9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-mtkmx, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-t2tcmj-control-plane-ng2n2, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-t2tcmj-control-plane-fn6dr, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-t2tcmj-control-plane-74qfv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rxjwt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-w9zkx, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-t2tcmj-control-plane-fn6dr, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-t2tcmj-control-plane-ng2n2, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-vs9b8, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace create-workload-cluster-lhtgwl
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 37m1s on Ginkgo node 3 of 3


• Failure [2220.966 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:39
  With 3 control-plane nodes and 2 worker nodes [It]
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:143

  Timed out after 300.051s.
  Deployment default/web failed
  Deployment:
  {
    "metadata": {
      "name": "web",
      "namespace": "default",
      "selfLink": "/apis/apps/v1/namespaces/default/deployments/web",
... skipping 348 lines ...
azureclusteridentity.infrastructure.cluster.x-k8s.io/multi-tenancy-identity created
configmap/cni-capz-e2e-6atvto-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-6atvto-crs-0 created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E0302 18:37:59.288267   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp: lookup capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by create-workload-cluster-n3zbe3/capz-e2e-6atvto-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
E0302 18:38:42.588633   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp: lookup capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0302 18:39:31.171075   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp: lookup capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0302 18:40:28.617836   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp: lookup capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0302 18:41:03.944777   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp: lookup capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0302 18:42:01.919268   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp: lookup capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane create-workload-cluster-n3zbe3/capz-e2e-6atvto-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
E0302 18:42:35.722073   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp: lookup capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0302 18:43:14.096535   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp: lookup capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0302 18:43:46.501668   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp: lookup capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0302 18:44:18.221605   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp: lookup capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for the machine pools to be provisioned
STEP: creating Azure clients with the workload cluster's subscription
STEP: Dumping logs from the "capz-e2e-6atvto" workload cluster
STEP: Dumping workload cluster create-workload-cluster-n3zbe3/capz-e2e-6atvto logs
E0302 18:45:08.199097   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp: lookup capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Dumping workload cluster create-workload-cluster-n3zbe3/capz-e2e-6atvto kube-system pod logs
STEP: Fetching kube-system pod logs took 399.286888ms
STEP: Dumping workload cluster create-workload-cluster-n3zbe3/capz-e2e-6atvto Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-8f59968d4-w8xsj, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-6atvto-control-plane-kjtbk, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-5hctt, container kube-proxy
... skipping 2 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-6atvto-control-plane-kjtbk, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-6atvto-control-plane-kjtbk, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-6atvto-control-plane-kjtbk, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-f9fd979d6-p4lsk, container coredns
STEP: Creating log watcher for controller kube-system/coredns-f9fd979d6-sbnp8, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-8qb4q, container calico-node
STEP: Error starting logs stream for pod kube-system/calico-kube-controllers-8f59968d4-w8xsj, container calico-kube-controllers: container "calico-kube-controllers" in pod "calico-kube-controllers-8f59968d4-w8xsj" is waiting to start: ContainerCreating
STEP: Error starting logs stream for pod kube-system/coredns-f9fd979d6-sbnp8, container coredns: container "coredns" in pod "coredns-f9fd979d6-sbnp8" is waiting to start: ContainerCreating
STEP: Error starting logs stream for pod kube-system/coredns-f9fd979d6-p4lsk, container coredns: container "coredns" in pod "coredns-f9fd979d6-p4lsk" is waiting to start: ContainerCreating
STEP: Fetching activity logs took 505.615185ms
STEP: Dumping all the Cluster API resources in the "create-workload-cluster-n3zbe3" namespace
STEP: Deleting all clusters in the create-workload-cluster-n3zbe3 namespace
STEP: Deleting cluster capz-e2e-6atvto
INFO: Waiting for the Cluster create-workload-cluster-n3zbe3/capz-e2e-6atvto to be deleted
STEP: Waiting for cluster capz-e2e-6atvto to be deleted
E0302 18:45:46.516935   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp: lookup capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0302 18:46:22.443713   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp: lookup capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0302 18:46:54.602011   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp: lookup capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0302 18:47:52.599194   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp: lookup capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0302 18:48:35.430206   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp: lookup capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0302 18:49:16.502760   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp: lookup capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0302 18:50:07.512046   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp: lookup capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0302 18:51:06.103196   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp: lookup capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace create-workload-cluster-n3zbe3
STEP: Redacting sensitive information from logs
E0302 18:52:05.864509   19233 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/create-workload-cluster-exzdjn/events?resourceVersion=6051": dial tcp: lookup capz-e2e-ktasnz-756d6e3d.eastus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 14m43s on Ginkgo node 1 of 3


• [SLOW TEST:883.360 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:39
... skipping 5 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation [It] With 3 control-plane nodes and 2 worker nodes 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:93

Ran 5 of 18 Specs in 3210.308 seconds
FAIL! -- 4 Passed | 1 Failed | 0 Pending | 13 Skipped


Ginkgo ran 1 suite in 54m43.212346577s
Test Suite Failed
make[1]: *** [Makefile:169: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:177: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...