PR CecileRobertMichon: Update kubectl hack/tools version to v1.22.4
Result FAILURE
Tests 1 failed / 1 succeeded
Started 2021-11-24 18:52
Elapsed 32m23s
Revision b344e7c502dd9a43974ee7c01f586fb9fe915641
Refs 1884

Test Failures


capz-e2e Workload cluster creation Creating a Windows Enabled cluster with dockershim With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node (23m56s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sWindows\sEnabled\scluster\swith\sdockershim\sWith\s3\scontrol\-plane\snodes\sand\s1\sLinux\sworker\snode\sand\s1\sWindows\sworker\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532
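The --ginkgo.focus value is a regular expression matched against each spec's full description (the nested container and spec texts joined with spaces); Ginkgo compiles it with Go's standard regexp package, so the \s and \- escapes keep spaces and hyphens literal inside the single-quoted shell argument. A quick sketch of the match against the failing spec's name:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Escaped focus pattern copied from the repro command above.
        focus := regexp.MustCompile(`capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sWindows\sEnabled\scluster\swith\sdockershim\sWith\s3\scontrol\-plane\snodes\sand\s1\sLinux\sworker\snode\sand\s1\sWindows\sworker\snode$`)

        // Ginkgo joins the suite and container/spec descriptions with spaces.
        spec := "capz-e2e Workload cluster creation Creating a Windows Enabled cluster with dockershim With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node"
        fmt.Println(focus.MatchString(spec)) // true
    }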
Timed out after 300.039s.
Service default/web9vutne-ilb failed
Service:
{
  "metadata": {
    "name": "web9vutne-ilb",
    "namespace": "default",
    "uid": "63123b46-8b38-40db-ade7-f8d435b4430e",
    "resourceVersion": "1301",
    "creationTimestamp": "2021-11-24T19:07:25Z",
    "annotations": {
      "service.beta.kubernetes.io/azure-load-balancer-internal": "true"
    },
    "finalizers": [
      "service.kubernetes.io/load-balancer-cleanup"
    ],
    "managedFields": [
      {
        "manager": "cluster-api-e2e",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2021-11-24T19:07:25Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:metadata": {
            "f:annotations": {
              ".": {},
              "f:service.beta.kubernetes.io/azure-load-balancer-internal": {}
            }
          },
          "f:spec": {
            "f:allocateLoadBalancerNodePorts": {},
            "f:externalTrafficPolicy": {},
            "f:internalTrafficPolicy": {},
            "f:ports": {
              ".": {},
              "k:{\"port\":80,\"protocol\":\"TCP\"}": {
                ".": {},
                "f:name": {},
                "f:port": {},
                "f:protocol": {},
                "f:targetPort": {}
              },
              "k:{\"port\":443,\"protocol\":\"TCP\"}": {
                ".": {},
                "f:name": {},
                "f:port": {},
                "f:protocol": {},
                "f:targetPort": {}
              }
            },
            "f:selector": {},
            "f:sessionAffinity": {},
            "f:type": {}
          }
        }
      },
      {
        "manager": "kube-controller-manager",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2021-11-24T19:07:26Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:metadata": {
            "f:finalizers": {
              ".": {},
              "v:\"service.kubernetes.io/load-balancer-cleanup\"": {}
            }
          }
        },
        "subresource": "status"
      }
    ]
  },
  "spec": {
    "ports": [
      {
        "name": "http",
        "protocol": "TCP",
        "port": 80,
        "targetPort": 80,
        "nodePort": 31010
      },
      {
        "name": "https",
        "protocol": "TCP",
        "port": 443,
        "targetPort": 443,
        "nodePort": 32584
      }
    ],
    "selector": {
      "app": "web9vutne"
    },
    "clusterIP": "10.101.92.191",
    "clusterIPs": [
      "10.101.92.191"
    ],
    "type": "LoadBalancer",
    "sessionAffinity": "None",
    "externalTrafficPolicy": "Cluster",
    "ipFamilies": [
      "IPv4"
    ],
    "ipFamilyPolicy": "SingleStack",
    "allocateLoadBalancerNodePorts": true,
    "internalTrafficPolicy": "Cluster"
  },
  "status": {
    "loadBalancer": {}
  }
}
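The dump above is the internal load balancer ("-ilb") variant of the test Service: the service.beta.kubernetes.io/azure-load-balancer-internal: "true" annotation asks the Azure cloud provider for an internal rather than public LB, and the empty "status": {"loadBalancer": {}} at the end is the failure signal, since no ingress IP was ever written back. A sketch of the equivalent object in Go using client-go types (field values taken from the dump; the helper name is hypothetical):

    package e2e

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // internalLBService mirrors the Service dumped above: type LoadBalancer
    // plus the Azure internal-LB annotation, selecting the web9vutne pods.
    func internalLBService() *corev1.Service {
        return &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "web9vutne-ilb",
                Namespace: "default",
                Annotations: map[string]string{
                    "service.beta.kubernetes.io/azure-load-balancer-internal": "true",
                },
            },
            Spec: corev1.ServiceSpec{
                Type:     corev1.ServiceTypeLoadBalancer,
                Selector: map[string]string{"app": "web9vutne"},
                Ports: []corev1.ServicePort{
                    {Name: "http", Protocol: corev1.ProtocolTCP, Port: 80, TargetPort: intstr.FromInt(80)},
                    {Name: "https", Protocol: corev1.ProtocolTCP, Port: 443, TargetPort: intstr.FromInt(443)},
                },
            },
        }
    }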
LAST SEEN                      TYPE     REASON                  OBJECT                   MESSAGE
2021-11-24 19:07:25 +0000 UTC  Normal   EnsuringLoadBalancer    service/web9vutne-ilb    Ensuring load balancer
2021-11-24 19:07:26 +0000 UTC  Warning  FailedToCreateEndpoint  endpoints/web9vutne-ilb  Failed to create endpoint for service default/web9vutne-ilb: endpoints "web9vutne-ilb" already exists

Expected
    <bool>: false
to be true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:214
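The "Expected <bool>: false to be true" failure is a Gomega assertion at helpers.go:214 that polls the Service until the load balancer reports an ingress endpoint. A minimal sketch of that kind of wait loop, assuming client-go and Gomega (the actual helper in helpers.go may differ):

    package e2e

    import (
        "context"
        "time"

        . "github.com/onsi/gomega"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForServiceIngress is a hypothetical stand-in for the failing check:
    // poll until the cloud provider populates status.loadBalancer, and fail
    // with the boolean mismatch seen above if the timeout (300s) expires first.
    func waitForServiceIngress(ctx context.Context, cs kubernetes.Interface, namespace, name string) {
        Eventually(func() bool {
            svc, err := cs.CoreV1().Services(namespace).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false
            }
            // An empty status.loadBalancer, as in the dump above, stays false.
            return len(svc.Status.LoadBalancer.Ingress) > 0
        }, 5*time.Minute, 10*time.Second).Should(BeTrue(), "Service %s/%s failed", namespace, name)
    }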
				




Error lines from build-log.txt

... skipping 433 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:579

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Wed, 24 Nov 2021 19:00:24 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-dai27x" for hosting the cluster
Nov 24 19:00:24.268: INFO: starting to create namespace for hosting the "capz-e2e-dai27x" test spec
2021/11/24 19:00:24 failed trying to get namespace (capz-e2e-dai27x):namespaces "capz-e2e-dai27x" not found
INFO: Creating namespace capz-e2e-dai27x
INFO: Creating event watcher for namespace "capz-e2e-dai27x"
Nov 24 19:00:24.315: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-dai27x-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-dai27x-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.4, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-jobtjqn4i7qdro to be complete
Nov 24 19:10:47.154: INFO: waiting for job default/curl-to-elb-jobtjqn4i7qdro to be complete
Nov 24 19:10:57.223: INFO: job default/curl-to-elb-jobtjqn4i7qdro is complete, took 10.069205537s
STEP: connecting directly to the external LB service
Nov 24 19:10:57.223: INFO: starting attempts to connect directly to the external LB service
2021/11/24 19:10:57 [DEBUG] GET http://20.96.232.5
2021/11/24 19:11:27 [ERR] GET http://20.96.232.5 request failed: Get "http://20.96.232.5": dial tcp 20.96.232.5:80: i/o timeout
2021/11/24 19:11:27 [DEBUG] GET http://20.96.232.5: retrying in 1s (4 left)
Nov 24 19:11:28.285: INFO: successfully connected to the external LB service
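The [DEBUG]/[ERR] lines above show retry behavior rescuing this step: the first GET to the external LB times out and a later attempt succeeds. The log format matches hashicorp/go-retryablehttp; a minimal sketch of that pattern, assuming (not confirmed from this log alone) that the test helper uses that library:

    package main

    import (
        "fmt"

        retryablehttp "github.com/hashicorp/go-retryablehttp"
    )

    func main() {
        // The default client retries transient failures with backoff; the
        // "retrying in 1s (4 left)" line matches the default RetryMax of 4.
        client := retryablehttp.NewClient()
        client.RetryMax = 4

        // External LB address taken from the log above.
        resp, err := client.Get("http://20.96.232.5")
        if err != nil {
            fmt.Println("request failed after retries:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("connected:", resp.Status)
    }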
STEP: deleting the test resources
Nov 24 19:11:28.285: INFO: starting to delete external LB service webw9su08-elb
Nov 24 19:11:28.348: INFO: starting to delete deployment webw9su08
Nov 24 19:11:28.384: INFO: starting to delete job curl-to-elb-jobtjqn4i7qdro
... skipping 65 lines ...
STEP: Fetching activity logs took 565.697782ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-dai27x" namespace
STEP: Deleting all clusters in the capz-e2e-dai27x namespace
STEP: Deleting cluster capz-e2e-dai27x-win-vmss
INFO: Waiting for the Cluster capz-e2e-dai27x/capz-e2e-dai27x-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-dai27x-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-dai27x-win-vmss-control-plane-897fl, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5dpgp, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-mprq8, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-rmhxc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-dai27x-win-vmss-control-plane-897fl, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-dai27x-win-vmss-control-plane-897fl, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-bx5l5, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bvjwq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-dai27x-win-vmss-control-plane-897fl, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bkb7c, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-5sspf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-pwpxk, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-dai27x
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 22m38s on Ginkgo node 3 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Wed, 24 Nov 2021 19:00:23 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-iuoy8u" for hosting the cluster
Nov 24 19:00:23.672: INFO: starting to create namespace for hosting the "capz-e2e-iuoy8u" test spec
2021/11/24 19:00:23 failed trying to get namespace (capz-e2e-iuoy8u):namespaces "capz-e2e-iuoy8u" not found
INFO: Creating namespace capz-e2e-iuoy8u
INFO: Creating event watcher for namespace "capz-e2e-iuoy8u"
Nov 24 19:00:23.714: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-iuoy8u-win-ha
INFO: Creating the workload cluster with name "capz-e2e-iuoy8u-win-ha" using the "windows" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 94 lines ...
STEP: Fetching activity logs took 556.788818ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-iuoy8u" namespace
STEP: Deleting all clusters in the capz-e2e-iuoy8u namespace
STEP: Deleting cluster capz-e2e-iuoy8u-win-ha
INFO: Waiting for the Cluster capz-e2e-iuoy8u/capz-e2e-iuoy8u-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-iuoy8u-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-iuoy8u-win-ha-control-plane-6268f, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-72dl5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-iuoy8u-win-ha-control-plane-6268f, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-iuoy8u-win-ha-control-plane-nhkfv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-iuoy8u-win-ha-control-plane-hx855, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-iuoy8u-win-ha-control-plane-nhkfv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-n2f2w, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7d977, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-iuoy8u-win-ha-control-plane-hx855, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-hfsbd, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-82tfv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-8qxp9, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5tvf7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-t7tzh, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-iuoy8u-win-ha-control-plane-hx855, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-iuoy8u-win-ha-control-plane-hx855, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-kdvxm, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-iuoy8u-win-ha-control-plane-nhkfv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-iuoy8u-win-ha-control-plane-nhkfv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-54wkd, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-iuoy8u-win-ha-control-plane-6268f, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cx5sj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-lr276, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-csktp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-rsqvd, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-iuoy8u-win-ha-control-plane-6268f, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-iuoy8u
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 23m57s on Ginkgo node 1 of 3

... skipping 4 lines ...
  Creating a Windows Enabled cluster with dockershim
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:530
    With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node [It]
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532

    Timed out after 300.039s.
    Service default/web9vutne-ilb failed
    Service:
    {
      "metadata": {
        "name": "web9vutne-ilb",
        "namespace": "default",
        "uid": "63123b46-8b38-40db-ade7-f8d435b4430e",
... skipping 101 lines ...
      "status": {
        "loadBalancer": {}
      }
    }
    LAST SEEN                      TYPE     REASON                  OBJECT                   MESSAGE
    2021-11-24 19:07:25 +0000 UTC  Normal   EnsuringLoadBalancer    service/web9vutne-ilb    Ensuring load balancer
    2021-11-24 19:07:26 +0000 UTC  Warning  FailedToCreateEndpoint  endpoints/web9vutne-ilb  Failed to create endpoint for service default/web9vutne-ilb: endpoints "web9vutne-ilb" already exists
    
    Expected
        <bool>: false
    to be true

    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:214
... skipping 43 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a Windows Enabled cluster with dockershim [It] With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:214

Ran 2 of 24 Specs in 1602.781 seconds
FAIL! -- 1 Passed | 1 Failed | 0 Pending | 22 Skipped


Ginkgo ran 1 suite in 28m10.017642522s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make[1]: *** [Makefile:176: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:184: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...