PR: devigned: do not include customData in AzureMachinePool hash calculation
Result: FAILURE
Tests: 1 failed / 1 succeeded
Started: 2021-02-26 00:30
Elapsed: 37m7s
Revision: db46badd0cf8f21fcb7fd8831522c7cf978a1637
Refs: 1197

Test Failures


capz-e2e Workload cluster creation Creating a Windows Enabled cluster With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node 28m48s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sWindows\sEnabled\scluster\sWith\s3\scontrol\-plane\snodes\sand\s1\sLinux\sworker\snode\sand\s1\sWindows\sworker\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:420
Unexpected error:
    <*errors.StatusError | 0xc000806640>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "etcdserver: leader changed",
            Reason: "",
            Details: nil,
            Code: 500,
        },
    }
    etcdserver: leader changed
occurred
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_lb.go:170
				
stdout/stderr available in junit.e2e_suite.2.xml



Passed tests: 1

Skipped tests: 16

Error lines from build-log.txt

... skipping 452 lines ...
STEP: Fetching activity logs took 537.375453ms
STEP: Dumping all the Cluster API resources in the "create-workload-cluster-6habqg" namespace
STEP: Deleting all clusters in the create-workload-cluster-6habqg namespace
STEP: Deleting cluster capz-e2e-yqi54f
INFO: Waiting for the Cluster create-workload-cluster-6habqg/capz-e2e-yqi54f to be deleted
STEP: Waiting for cluster capz-e2e-yqi54f to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-yqi54f-control-plane-p7mrr, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-5lsqp, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-yqi54f-control-plane-p7mrr, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-yqi54f-control-plane-p7mrr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-gkmqg, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6tdg9, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-c4gkd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-sr9lh, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-wfqtd, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-zcwlc, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-yqi54f-control-plane-p7mrr, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-np4f6, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace create-workload-cluster-6habqg
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 17m1s on Ginkgo node 1 of 3


... skipping 69 lines ...
STEP: deleting the ilb test resources
STEP: creating an external Load Balancer service
STEP: waiting for service default/web-windows-elb to be available
STEP: connecting to the external LB service from a curl pod
STEP: Dumping logs from the "capz-e2e-c4wegj" workload cluster
STEP: Dumping workload cluster create-workload-cluster-lhren2/capz-e2e-c4wegj logs
Failed to get logs for machine capz-e2e-c4wegj-md-win-79df4c9774-pj762, cluster create-workload-cluster-lhren2/capz-e2e-c4wegj: dialing from control plane to target node at capz-e2e-c4wegj-md-win-bjkxr: ssh: rejected: connect failed (Temporary failure in name resolution)
STEP: Dumping workload cluster create-workload-cluster-lhren2/capz-e2e-c4wegj kube-system pod logs
STEP: Fetching kube-system pod logs took 807.43861ms
STEP: Dumping workload cluster create-workload-cluster-lhren2/capz-e2e-c4wegj Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-c4wegj-control-plane-jj6xg, container etcd
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-c4wegj-control-plane-zjpnt, container etcd
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-c4wegj-control-plane-zjpnt, container kube-controller-manager
... skipping 15 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-875l5, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-cvbnx, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-proxy-wr4pk, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-p55r4, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-7h8sd, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-prfhc, container kube-flannel
STEP: Got error while iterating over activity logs for resource group capz-e2e-c4wegj: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000604035s
STEP: Dumping all the Cluster API resources in the "create-workload-cluster-lhren2" namespace
STEP: Deleting all clusters in the create-workload-cluster-lhren2 namespace
STEP: Deleting cluster capz-e2e-c4wegj
INFO: Waiting for the Cluster create-workload-cluster-lhren2/capz-e2e-c4wegj to be deleted
STEP: Waiting for cluster capz-e2e-c4wegj to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-p55r4, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-c4wegj-control-plane-zjpnt, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-c4wegj-control-plane-5h55w, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-c4wegj-control-plane-zjpnt, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-c4wegj-control-plane-jj6xg, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-prfhc, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-c4wegj-control-plane-jj6xg, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-z92gc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-c4wegj-control-plane-5h55w, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-875l5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-cvbnx, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wr4pk, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-c4wegj-control-plane-jj6xg, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-c4wegj-control-plane-jj6xg, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-c4wegj-control-plane-zjpnt, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-7h8sd, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-8t282, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-c4wegj-control-plane-5h55w, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-c4wegj-control-plane-zjpnt, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-m2nvz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-82tmh, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-c4wegj-control-plane-5h55w, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-c6ptv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-55cm5, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace create-workload-cluster-lhren2
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 28m48s on Ginkgo node 2 of 3


... skipping 2 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:39
  Creating a Windows Enabled cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:418
    With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node [It]
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:420

    Unexpected error:
        <*errors.StatusError | 0xc000806640>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {
                    SelfLink: "",
                    ResourceVersion: "",
... skipping 55 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a Windows Enabled cluster [It] With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_lb.go:170

Ran 2 of 18 Specs in 1877.889 seconds
FAIL! -- 1 Passed | 1 Failed | 0 Pending | 16 Skipped


Ginkgo ran 1 suite in 32m37.815864669s
Test Suite Failed
make[1]: *** [Makefile:166: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:174: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...