PR: devigned: do not include customData in AzureMachinePool hash calculation
Result: FAILURE
Tests: 1 failed / 1 succeeded
Started: 2021-02-26 16:46
Elapsed: 35m59s
Revision: 1e53f6a599926cc627a181ff8753b0802497d8b5
Refs: 1197

Test Failures


capz-e2e Workload cluster creation Creating a Windows enabled VMSS cluster with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node 25m25s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sWindows\senabled\sVMSS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\san\sLinux\sAzureMachinePool\swith\s1\snodes\sand\sWindows\sAzureMachinePool\swith\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:468
Timed out after 900.000s.
Expected
    <int>: 0
to equal
    <int>: 1
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api@v0.3.11-0.20210209200458-51a6d64d171c/test/framework/machinepool_helpers.go:85
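The assertion that timed out is the Cluster API framework's machine pool wait (machinepool_helpers.go:85 above): it polls the number of ready MachinePool replicas and expects it to reach the desired count (1) within the 900s window, but the Windows AzureMachinePool never reported a ready node. Below is a minimal, self-contained sketch of that polling pattern; the test name and the stubbed replica function are illustrative, not the framework's actual code.

```go
package e2e_sketch

import (
	"testing"
	"time"

	. "github.com/onsi/gomega"
)

// TestWaitForMachinePoolReplicas is a simplified, hypothetical stand-in for
// the framework helper behind the failure above: it polls a replica-count
// function with Gomega's Eventually until the count equals the desired value
// or the timeout expires (the real run used a 900s timeout and never left 0).
func TestWaitForMachinePoolReplicas(t *testing.T) {
	g := NewWithT(t)

	// Stub for "how many MachinePool replicas are ready"; the real helper
	// queries the workload cluster through the management cluster client.
	start := time.Now()
	readyReplicas := func() int {
		if time.Since(start) > 3*time.Second {
			return 1 // pretend the node became ready after a few seconds
		}
		return 0
	}

	// If readyReplicas stayed at 0 for the whole window, this would report
	// the same "Expected <int>: 0 to equal <int>: 1" timeout seen above.
	g.Eventually(readyReplicas, 30*time.Second, 1*time.Second).Should(Equal(1))
}
```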
				




Error lines from build-log.txt

... skipping 529 lines ...
STEP: creating an external Load Balancer service
STEP: waiting for service default/web-elb to be available
STEP: connecting to the external LB service from a curl pod
STEP: waiting for job default/curl-to-elb-jobw840p to be complete
STEP: connecting directly to the external LB service
2021/02/26 17:09:11 [DEBUG] GET http://52.248.98.154
2021/02/26 17:09:41 [ERR] GET http://52.248.98.154 request failed: Get "http://52.248.98.154": dial tcp 52.248.98.154:80: i/o timeout
2021/02/26 17:09:41 [DEBUG] GET http://52.248.98.154: retrying in 1s (4 left)
STEP: deleting the test resources
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windows to be available
STEP: creating an internal Load Balancer service
... skipping 4 lines ...
STEP: creating an external Load Balancer service
STEP: waiting for service default/web-windows-elb to be available
STEP: connecting to the external LB service from a curl pod
STEP: waiting for job default/curl-to-elb-job6d96r to be complete
STEP: connecting directly to the external LB service
2021/02/26 17:12:24 [DEBUG] GET http://52.248.103.41
2021/02/26 17:12:54 [ERR] GET http://52.248.103.41 request failed: Get "http://52.248.103.41": dial tcp 52.248.103.41:80: i/o timeout
2021/02/26 17:12:54 [DEBUG] GET http://52.248.103.41: retrying in 1s (4 left)
2021/02/26 17:13:25 [ERR] GET http://52.248.103.41 request failed: Get "http://52.248.103.41": dial tcp 52.248.103.41:80: i/o timeout
2021/02/26 17:13:25 [DEBUG] GET http://52.248.103.41: retrying in 2s (3 left)
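The [DEBUG]/[ERR] lines above come from the retrying HTTP client the test uses to hit the load balancer's public IP directly; the format matches hashicorp/go-retryablehttp's default logger, which retries a timed-out GET with backoff until the retry budget runs out. A rough sketch of that kind of check, with illustrative settings rather than the test's real configuration, could look like:

```go
package main

import (
	"log"
	"time"

	"github.com/hashicorp/go-retryablehttp"
)

func main() {
	// Retrying GET against the external LB address from this run; the IP is
	// ephemeral and the client settings here are assumptions.
	client := retryablehttp.NewClient()
	client.RetryMax = 5                   // "(4 left)" after the first failed attempt
	client.RetryWaitMin = 1 * time.Second // "retrying in 1s", backing off per attempt
	client.RetryWaitMax = 30 * time.Second

	resp, err := client.Get("http://52.248.103.41")
	if err != nil {
		log.Fatalf("external LB never became reachable: %v", err)
	}
	defer resp.Body.Close()
	log.Printf("external LB responded: %s", resp.Status)
}
```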
STEP: deleting the test resources
STEP: Dumping logs from the "capz-e2e-wsb7ln" workload cluster
STEP: Dumping workload cluster create-workload-cluster-nypmyg/capz-e2e-wsb7ln logs
Failed to get logs for machine capz-e2e-wsb7ln-md-win-d7559547-hbdq6, cluster create-workload-cluster-nypmyg/capz-e2e-wsb7ln: dialing from control plane to target node at capz-e2e-wsb7ln-md-win-bkkjj: ssh: rejected: connect failed (Temporary failure in name resolution)
STEP: Dumping workload cluster create-workload-cluster-nypmyg/capz-e2e-wsb7ln kube-system pod logs
STEP: Fetching kube-system pod logs took 357.107034ms
STEP: Dumping workload cluster create-workload-cluster-nypmyg/capz-e2e-wsb7ln Azure activity log
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-r4vgw, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-wsb7ln-control-plane-wp2xl, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-wsb7ln-control-plane-hrsd5, container kube-scheduler
... skipping 15 lines ...
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-wsb7ln-control-plane-hrsd5, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-wsb7ln-control-plane-hrsd5, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-wsb7ln-control-plane-ntktt, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-wsb7ln-control-plane-ntktt, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-82rnm, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-wsb7ln-control-plane-wp2xl, container kube-controller-manager
STEP: Got error while iterating over activity logs for resource group capz-e2e-wsb7ln: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000571417s
STEP: Dumping all the Cluster API resources in the "create-workload-cluster-nypmyg" namespace
STEP: Deleting all clusters in the create-workload-cluster-nypmyg namespace
STEP: Deleting cluster capz-e2e-wsb7ln
INFO: Waiting for the Cluster create-workload-cluster-nypmyg/capz-e2e-wsb7ln to be deleted
STEP: Waiting for cluster capz-e2e-wsb7ln to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-wsb7ln-control-plane-wp2xl, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-wsb7ln-control-plane-ntktt, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5ldwt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-wsb7ln-control-plane-wp2xl, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-j9hb4, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-wsb7ln-control-plane-hrsd5, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-wsb7ln-control-plane-ntktt, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-wsb7ln-control-plane-hrsd5, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5lp2n, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-wsb7ln-control-plane-wp2xl, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-wsb7ln-control-plane-hrsd5, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-wsb7ln-control-plane-hrsd5, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-p7hhs, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-r4vgw, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-8pn4j, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-82rnm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-kdk4j, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-wsb7ln-control-plane-ntktt, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-wsb7ln-control-plane-ntktt, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-s7v7k, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-tnspn, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-w5cd2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-vpzn9, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-wsb7ln-control-plane-wp2xl, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace create-workload-cluster-nypmyg
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 26m25s on Ginkgo node 3 of 3


... skipping 8 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a Windows enabled VMSS cluster [It] with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api@v0.3.11-0.20210209200458-51a6d64d171c/test/framework/machinepool_helpers.go:85

Ran 2 of 18 Specs in 1776.896 seconds
FAIL! -- 1 Passed | 1 Failed | 0 Pending | 16 Skipped


Ginkgo ran 1 suite in 31m4.467484946s
Test Suite Failed
make[1]: *** [Makefile:169: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:177: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...