PR       | devigned: do not include customData in AzureMachinePool hash calculation
Result   | FAILURE
Tests    | 1 failed / 1 succeeded
Started  |
Elapsed  | 35m59s
Revision | 1e53f6a599926cc627a181ff8753b0802497d8b5
Refs     | 1197
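The PR under test drops the bootstrap customData from the hash CAPZ computes over the AzureMachinePool/VMSS spec, presumably so that a change to bootstrap data alone does not register as a model change. A minimal Go sketch of that kind of exclusion follows; the struct and function names are hypothetical and this is not the PR's actual implementation.

package hashsketch

import (
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
)

// vmssSpecForHash is a hypothetical, trimmed-down stand-in for the
// AzureMachinePool fields that feed the hash; the real type lives in the
// CAPZ API/controller packages.
type vmssSpecForHash struct {
	SKU        string `json:"sku"`
	Capacity   int64  `json:"capacity"`
	Image      string `json:"image"`
	CustomData string `json:"customData,omitempty"`
}

// hashWithoutCustomData clears customData before hashing, so that a change to
// bootstrap data alone does not change the hash. This mirrors the stated
// intent of the PR title, not its actual implementation.
func hashWithoutCustomData(spec vmssSpecForHash) (string, error) {
	spec.CustomData = "" // exclude customData from the hash input
	b, err := json.Marshal(spec)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(b)
	return base64.RawStdEncoding.EncodeToString(sum[:]), nil
}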
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sWindows\senabled\sVMSS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\san\sLinux\sAzureMachinePool\swith\s1\snodes\sand\sWindows\sAzureMachinePool\swith\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:468
Timed out after 900.000s.
Expected
    <int>: 0
to equal
    <int>: 1
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api@v0.3.11-0.20210209200458-51a6d64d171c/test/framework/machinepool_helpers.go:85
from junit.e2e_suite.1.xml
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Fri, 26 Feb 2021 16:55:36 UTC on Ginkgo node 1 of 3 �[1mSTEP�[0m: Creating a namespace for hosting the "create-workload-cluster" test spec INFO: Creating namespace create-workload-cluster-d1zib7 INFO: Creating event watcher for namespace "create-workload-cluster-d1zib7" INFO: Creating the workload cluster with name "capz-e2e-bb4hms" using the "machine-pool-windows" template (Kubernetes v1.19.7, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-bb4hms --infrastructure (default) --kubernetes-version v1.19.7 --control-plane-machine-count 1 --worker-machine-count 1 --flavor machine-pool-windows INFO: Applying the cluster template yaml to the cluster cluster.cluster.x-k8s.io/capz-e2e-bb4hms created azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-bb4hms created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-bb4hms-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-bb4hms-control-plane created machinepool.exp.cluster.x-k8s.io/capz-e2e-bb4hms-mp-0 created azuremachinepool.exp.infrastructure.cluster.x-k8s.io/capz-e2e-bb4hms-mp-0 created kubeadmconfig.bootstrap.cluster.x-k8s.io/capz-e2e-bb4hms-mp-0 created machinepool.exp.cluster.x-k8s.io/capz-e2e-bb4hms-mp-win created azuremachinepool.exp.infrastructure.cluster.x-k8s.io/capz-e2e-bb4hms-mp-win created kubeadmconfig.bootstrap.cluster.x-k8s.io/capz-e2e-bb4hms-mp-win created configmap/cni-capz-e2e-bb4hms-crs-0 created clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-bb4hms-crs-0 created INFO: Waiting for the cluster infrastructure to be provisioned �[1mSTEP�[0m: Waiting for cluster to enter the provisioned phase INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by create-workload-cluster-d1zib7/capz-e2e-bb4hms-control-plane to be provisioned �[1mSTEP�[0m: Waiting for one control plane node to exist INFO: Waiting for control plane to be ready INFO: Waiting for control plane create-workload-cluster-d1zib7/capz-e2e-bb4hms-control-plane to be ready (implies underlying nodes to be ready as well) �[1mSTEP�[0m: Waiting for the control plane to be ready INFO: Waiting for the machine deployments to be provisioned INFO: Waiting for the machine pools to be provisioned �[1mSTEP�[0m: Waiting for the machine pool workload nodes to exist �[1mSTEP�[0m: Unable to dump workload cluster logs as the cluster is nil �[1mSTEP�[0m: Dumping all the Cluster API resources in the "create-workload-cluster-d1zib7" namespace �[1mSTEP�[0m: Deleting all clusters in the create-workload-cluster-d1zib7 namespace �[1mSTEP�[0m: Deleting cluster capz-e2e-bb4hms INFO: Waiting for the Cluster create-workload-cluster-d1zib7/capz-e2e-bb4hms to be deleted �[1mSTEP�[0m: Waiting for cluster capz-e2e-bb4hms to be deleted �[1mSTEP�[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace create-workload-cluster-d1zib7 �[1mSTEP�[0m: Redacting sensitive information from logs INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 25m25s on Ginkgo node 1 of 3
capz-e2e Workload cluster creation Creating a Windows Enabled cluster With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd in a HA cluster
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd in a single control plane cluster
capz-e2e Running the Cluster API E2E tests Running the MachineDeployment upgrade spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should adopt up-to-date control plane Machines without modification Should adopt up-to-date control plane Machines without modification
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Workload cluster creation Creating a GPU-enabled cluster with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a VMSS cluster with a single control plane node and an AzureMachinePool with 2 nodes
capz-e2e Workload cluster creation Creating a cluster using a different SP identity with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster With ipv6 worker node
capz-e2e Workload cluster creation Creating a private cluster Creates a public management cluster in the same vnet
capz-e2e Workload cluster creation With 3 control-plane nodes and 2 worker nodes
... skipping 529 lines ...
STEP: creating an external Load Balancer service
STEP: waiting for service default/web-elb to be available
STEP: connecting to the external LB service from a curl pod
STEP: waiting for job default/curl-to-elb-jobw840p to be complete
STEP: connecting directly to the external LB service
2021/02/26 17:09:11 [DEBUG] GET http://52.248.98.154
2021/02/26 17:09:41 [ERR] GET http://52.248.98.154 request failed: Get "http://52.248.98.154": dial tcp 52.248.98.154:80: i/o timeout
2021/02/26 17:09:41 [DEBUG] GET http://52.248.98.154: retrying in 1s (4 left)
STEP: deleting the test resources
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windows to be available
STEP: creating an internal Load Balancer service
... skipping 4 lines ...
STEP: creating an external Load Balancer service
STEP: waiting for service default/web-windows-elb to be available
STEP: connecting to the external LB service from a curl pod
STEP: waiting for job default/curl-to-elb-job6d96r to be complete
STEP: connecting directly to the external LB service
2021/02/26 17:12:24 [DEBUG] GET http://52.248.103.41
2021/02/26 17:12:54 [ERR] GET http://52.248.103.41 request failed: Get "http://52.248.103.41": dial tcp 52.248.103.41:80: i/o timeout
2021/02/26 17:12:54 [DEBUG] GET http://52.248.103.41: retrying in 1s (4 left)
2021/02/26 17:13:25 [ERR] GET http://52.248.103.41 request failed: Get "http://52.248.103.41": dial tcp 52.248.103.41:80: i/o timeout
2021/02/26 17:13:25 [DEBUG] GET http://52.248.103.41: retrying in 2s (3 left)
STEP: deleting the test resources
STEP: Dumping logs from the "capz-e2e-wsb7ln" workload cluster
STEP: Dumping workload cluster create-workload-cluster-nypmyg/capz-e2e-wsb7ln logs
Failed to get logs for machine capz-e2e-wsb7ln-md-win-d7559547-hbdq6, cluster create-workload-cluster-nypmyg/capz-e2e-wsb7ln: dialing from control plane to target node at capz-e2e-wsb7ln-md-win-bkkjj: ssh: rejected: connect failed (Temporary failure in name resolution)
STEP: Dumping workload cluster create-workload-cluster-nypmyg/capz-e2e-wsb7ln kube-system pod logs
STEP: Fetching kube-system pod logs took 357.107034ms
STEP: Dumping workload cluster create-workload-cluster-nypmyg/capz-e2e-wsb7ln Azure activity log
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-r4vgw, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-wsb7ln-control-plane-wp2xl, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-wsb7ln-control-plane-hrsd5, container kube-scheduler
... skipping 15 lines ...
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-wsb7ln-control-plane-hrsd5, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-wsb7ln-control-plane-hrsd5, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-wsb7ln-control-plane-ntktt, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-wsb7ln-control-plane-ntktt, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-82rnm, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-wsb7ln-control-plane-wp2xl, container kube-controller-manager
STEP: Got error while iterating over activity logs for resource group capz-e2e-wsb7ln: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000571417s
STEP: Dumping all the Cluster API resources in the "create-workload-cluster-nypmyg" namespace
STEP: Deleting all clusters in the create-workload-cluster-nypmyg namespace
STEP: Deleting cluster capz-e2e-wsb7ln
INFO: Waiting for the Cluster create-workload-cluster-nypmyg/capz-e2e-wsb7ln to be deleted
STEP: Waiting for cluster capz-e2e-wsb7ln to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-wsb7ln-control-plane-wp2xl, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-wsb7ln-control-plane-ntktt, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5ldwt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-wsb7ln-control-plane-wp2xl, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-j9hb4, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-wsb7ln-control-plane-hrsd5, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-wsb7ln-control-plane-ntktt, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-wsb7ln-control-plane-hrsd5, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5lp2n, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-wsb7ln-control-plane-wp2xl, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-wsb7ln-control-plane-hrsd5, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-wsb7ln-control-plane-hrsd5, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-p7hhs, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-r4vgw, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-8pn4j, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-82rnm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-kdk4j, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-wsb7ln-control-plane-ntktt, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-wsb7ln-control-plane-ntktt, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-s7v7k, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-tnspn, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-w5cd2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-vpzn9, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-wsb7ln-control-plane-wp2xl, container kube-scheduler: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace create-workload-cluster-nypmyg
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 26m25s on Ginkgo node 3 of 3
... skipping 8 lines ...
STEP: Tearing down the management cluster

Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a Windows enabled VMSS cluster [It] with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api@v0.3.11-0.20210209200458-51a6d64d171c/test/framework/machinepool_helpers.go:85

Ran 2 of 18 Specs in 1776.896 seconds
FAIL! -- 1 Passed | 1 Failed | 0 Pending | 16 Skipped

Ginkgo ran 1 suite in 31m4.467484946s
Test Suite Failed
make[1]: *** [Makefile:169: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:177: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...
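The [DEBUG]/[ERR] GET lines against the external LB addresses earlier in the log come from a retrying HTTP check: connect directly to the service IP, and on failure back off and try again a few times. A stdlib-only sketch of that pattern follows; the function name and retry budget are illustrative, not the test's actual client.

package e2e_sketch

import (
	"fmt"
	"net/http"
	"time"
)

// getWithRetry issues GET requests against an external LB address, backing
// off between attempts, in the spirit of the retry lines in the log above.
func getWithRetry(url string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil
		}
		lastErr = err
		fmt.Printf("[DEBUG] GET %s: retrying in %s (%d left)\n", url, backoff, attempts-i-1)
		time.Sleep(backoff)
		backoff *= 2 // 1s, 2s, ... as the intervals in the log suggest
	}
	return fmt.Errorf("GET %s failed after %d attempts: %w", url, attempts, lastErr)
}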