PR: devigned: do not include customData in AzureMachinePool hash calculation
Result: FAILURE
Tests: 1 failed / 1 succeeded
Started: 2021-02-27 13:26
Elapsed: 38m51s
Revision: 1e53f6a599926cc627a181ff8753b0802497d8b5
Refs: 1197

Test Failures


capz-e2e Workload cluster creation Creating a Windows enabled VMSS cluster with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node 21m32s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sWindows\senabled\sVMSS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\san\sLinux\sAzureMachinePool\swith\s1\snodes\sand\sWindows\sAzureMachinePool\swith\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:468
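The escaped focus regex above is simply the full spec name with spaces escaped. For a local reproduction it can be easier to pass a shorter focus string through the repository's Makefile; a minimal sketch, assuming the test-e2e target honors a GINKGO_FOCUS variable (the variable name and the shortened focus string are assumptions, not taken from this log):

    cd "$(go env GOPATH)/src/sigs.k8s.io/cluster-api-provider-azure"
    # Ginkgo treats the focus string as a regular expression matched against the spec name
    GINKGO_FOCUS='Creating a Windows enabled VMSS cluster' make test-e2e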
Timed out after 180.116s.
Service default/web-elb failed
Service:
{
  "metadata": {
    "name": "web-elb",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/services/web-elb",
    "uid": "571ff3d9-77d3-41d7-ac48-7596217a1594",
    "resourceVersion": "1984",
    "creationTimestamp": "2021-02-27T13:44:22Z",
    "finalizers": [
      "service.kubernetes.io/load-balancer-cleanup"
    ],
    "managedFields": [
      {
        "manager": "cluster-api-e2e",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2021-02-27T13:44:22Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:spec": {
            "f:externalTrafficPolicy": {},
            "f:ports": {
              ".": {},
              "k:{\"port\":80,\"protocol\":\"TCP\"}": {
                ".": {},
                "f:name": {},
                "f:port": {},
                "f:protocol": {},
                "f:targetPort": {}
              },
              "k:{\"port\":443,\"protocol\":\"TCP\"}": {
                ".": {},
                "f:name": {},
                "f:port": {},
                "f:protocol": {},
                "f:targetPort": {}
              }
            },
            "f:selector": {
              ".": {},
              "f:app": {}
            },
            "f:sessionAffinity": {},
            "f:type": {}
          }
        }
      },
      {
        "manager": "kube-controller-manager",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2021-02-27T13:44:22Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:metadata": {
            "f:finalizers": {
              ".": {},
              "v:\"service.kubernetes.io/load-balancer-cleanup\"": {}
            }
          }
        }
      }
    ]
  },
  "spec": {
    "ports": [
      {
        "name": "http",
        "protocol": "TCP",
        "port": 80,
        "targetPort": 80,
        "nodePort": 31293
      },
      {
        "name": "https",
        "protocol": "TCP",
        "port": 443,
        "targetPort": 443,
        "nodePort": 31115
      }
    ],
    "selector": {
      "app": "web"
    },
    "clusterIP": "10.103.193.179",
    "type": "LoadBalancer",
    "sessionAffinity": "None",
    "externalTrafficPolicy": "Cluster"
  },
  "status": {
    "loadBalancer": {}
  }
}
LAST SEEN                      TYPE    REASON                OBJECT           MESSAGE
2021-02-27 13:44:22 +0000 UTC  Normal  EnsuringLoadBalancer  service/web-elb  Ensuring load balancer

Expected
    <bool>: false
to be true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:195
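The empty status.loadBalancer in the Service dump above, together with the single EnsuringLoadBalancer event, indicates the Azure load balancer never reported an ingress IP within the 180s window, so the availability check in helpers.go stayed false. A minimal sketch of how this could be inspected by hand against the workload cluster, assuming access to its kubeconfig (the kubeconfig path is a placeholder, and the label selector assumes a kubeadm-style control plane running the cloud provider inside kube-controller-manager):

    # does the Service have an ingress IP yet?
    kubectl --kubeconfig ./workload-cluster.kubeconfig get service web-elb -o jsonpath='{.status.loadBalancer.ingress}'
    # events beyond EnsuringLoadBalancer (EnsuredLoadBalancer, SyncLoadBalancerFailed, ...)
    kubectl --kubeconfig ./workload-cluster.kubeconfig describe service web-elb
    # service-controller / cloud-provider activity for this Service
    kubectl --kubeconfig ./workload-cluster.kubeconfig -n kube-system logs -l component=kube-controller-manager | grep -i web-elb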
				



Error lines from build-log.txt

... skipping 434 lines ...
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-kjg2n, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-95f98, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-pmnlm, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-proxy-tqtcd, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-g7r25, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-71cn5d-control-plane-sxdv7, container kube-scheduler
STEP: Error starting logs stream for pod kube-system/kube-proxy-cvs7r, container kube-proxy: Get "https://10.1.0.5:10250/containerLogs/kube-system/kube-proxy-cvs7r/kube-proxy?follow=true": dial tcp 10.1.0.5:10250: connect: connection refused
STEP: Error starting logs stream for pod kube-system/kube-flannel-ds-amd64-kjg2n, container kube-flannel: Get "https://10.1.0.5:10250/containerLogs/kube-system/kube-flannel-ds-amd64-kjg2n/kube-flannel?follow=true": dial tcp 10.1.0.5:10250: connect: connection refused
STEP: Fetching activity logs took 619.038468ms
STEP: Dumping all the Cluster API resources in the "create-workload-cluster-u652jx" namespace
STEP: Deleting all clusters in the create-workload-cluster-u652jx namespace
STEP: Deleting cluster capz-e2e-71cn5d
INFO: Waiting for the Cluster create-workload-cluster-u652jx/capz-e2e-71cn5d to be deleted
STEP: Waiting for cluster capz-e2e-71cn5d to be deleted
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-95f98, container kube-proxy: Get "https://10.1.0.6:10250/containerLogs/kube-system/kube-proxy-windows-95f98/kube-proxy?follow=true": dial tcp 10.1.0.6:10250: i/o timeout
STEP: Error starting logs stream for pod kube-system/kube-flannel-ds-windows-amd64-mpsgb, container kube-flannel: Get "https://10.1.0.6:10250/containerLogs/kube-system/kube-flannel-ds-windows-amd64-mpsgb/kube-flannel?follow=true": dial tcp 10.1.0.6:10250: i/o timeout
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-pmnlm, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-wfq7f, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-71cn5d-control-plane-sxdv7, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-8cmc8, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-71cn5d-control-plane-sxdv7, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-tczqz, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-fjq6p, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6khbk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-71cn5d-control-plane-sxdv7, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-g7r25, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-71cn5d-control-plane-sxdv7, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-tqtcd, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace create-workload-cluster-u652jx
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 21m32s on Ginkgo node 3 of 3


... skipping 3 lines ...
  Creating a Windows enabled VMSS cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:467
    with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node [It]
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:468

    Timed out after 180.116s.
    Service default/web-elb failed
    Service:
    {
      "metadata": {
        "name": "web-elb",
        "namespace": "default",
        "selfLink": "/api/v1/namespaces/default/services/web-elb",
... skipping 186 lines ...
STEP: creating an external Load Balancer service
STEP: waiting for service default/web-elb to be available
STEP: connecting to the external LB service from a curl pod
STEP: waiting for job default/curl-to-elb-jobzbvwc to be complete
STEP: connecting directly to the external LB service
2021/02/27 13:50:55 [DEBUG] GET http://20.73.232.199
2021/02/27 13:51:25 [ERR] GET http://20.73.232.199 request failed: Get "http://20.73.232.199": dial tcp 20.73.232.199:80: i/o timeout
2021/02/27 13:51:25 [DEBUG] GET http://20.73.232.199: retrying in 1s (4 left)
STEP: deleting the test resources
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windows to be available
STEP: creating an internal Load Balancer service
... skipping 7 lines ...
STEP: waiting for job default/curl-to-elb-jobs3n9l to be complete
STEP: connecting directly to the external LB service
2021/02/27 13:55:22 [DEBUG] GET http://20.73.233.237
STEP: deleting the test resources
STEP: Dumping logs from the "capz-e2e-fvpjso" workload cluster
STEP: Dumping workload cluster create-workload-cluster-3cxrio/capz-e2e-fvpjso logs
Failed to get logs for machine capz-e2e-fvpjso-md-win-6578f4576d-dzmrd, cluster create-workload-cluster-3cxrio/capz-e2e-fvpjso: dialing from control plane to target node at capz-e2e-fvpjso-md-win-6c5fp: ssh: rejected: connect failed (Temporary failure in name resolution)
STEP: Dumping workload cluster create-workload-cluster-3cxrio/capz-e2e-fvpjso kube-system pod logs
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-fvpjso-control-plane-bnkkc, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-rfsxp, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-fvpjso-control-plane-bnkkc, container etcd
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-fvpjso-control-plane-w6r66, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-fvpjso-control-plane-w6r66, container kube-apiserver
... skipping 15 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-fvpjso-control-plane-bnkkc, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-qs7rr, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-p7xb7, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-fvpjso-control-plane-csg5s, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-fvpjso-control-plane-csg5s, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-xrqd4, container kube-proxy
STEP: Got error while iterating over activity logs for resource group capz-e2e-fvpjso: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000411036s
STEP: Dumping all the Cluster API resources in the "create-workload-cluster-3cxrio" namespace
STEP: Deleting all clusters in the create-workload-cluster-3cxrio namespace
STEP: Deleting cluster capz-e2e-fvpjso
INFO: Waiting for the Cluster create-workload-cluster-3cxrio/capz-e2e-fvpjso to be deleted
STEP: Waiting for cluster capz-e2e-fvpjso to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-fvpjso-control-plane-w6r66, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-p7xb7, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-rfsxp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vwdsg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-ghbbh, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-fvpjso-control-plane-w6r66, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-f2bvr, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-qs7rr, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-fvpjso-control-plane-w6r66, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-lq55x, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-fvpjso-control-plane-csg5s, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-fvpjso-control-plane-csg5s, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-fvpjso-control-plane-bnkkc, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-fvpjso-control-plane-w6r66, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-fvpjso-control-plane-csg5s, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-ckxcw, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-l9sh7, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-hhqqc, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-fvpjso-control-plane-csg5s, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-fvpjso-control-plane-bnkkc, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-8p8kf, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-fvpjso-control-plane-bnkkc, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xrqd4, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-fvpjso-control-plane-bnkkc, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace create-workload-cluster-3cxrio
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 32m1s on Ginkgo node 1 of 3


... skipping 8 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a Windows enabled VMSS cluster [It] with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:195

Ran 2 of 18 Specs in 2064.751 seconds
FAIL! -- 1 Passed | 1 Failed | 0 Pending | 16 Skipped


Ginkgo ran 1 suite in 35m25.943079067s
Test Suite Failed
make[1]: *** [Makefile:169: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:177: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...