PR | devigned: do not include customData in AzureMachinePool hash calculation
Result | FAILURE
Tests | 3 failed / 2 succeeded
Started |
Elapsed | 58m59s
Revision | 11b61d46099c9772fd1ca514c15fbef2d6063d47
Refs | 1197
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sVMSS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\san\sAzureMachinePool\swith\s2\snodes$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:265
Timed out after 900.001s.
Expected <int>: 0 to equal <int>: 2
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api@v0.3.11-0.20210209200458-51a6d64d171c/test/framework/machinepool_helpers.go:85
from junit.e2e_suite.2.xml
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Tue, 02 Mar 2021 18:58:34 UTC on Ginkgo node 2 of 3 �[1mSTEP�[0m: Creating a namespace for hosting the "create-workload-cluster" test spec INFO: Creating namespace create-workload-cluster-jf9wjo INFO: Creating event watcher for namespace "create-workload-cluster-jf9wjo" INFO: Creating the workload cluster with name "capz-e2e-r17mb8" using the "machine-pool" template (Kubernetes v1.19.7, 1 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-r17mb8 --infrastructure (default) --kubernetes-version v1.19.7 --control-plane-machine-count 1 --worker-machine-count 2 --flavor machine-pool INFO: Applying the cluster template yaml to the cluster cluster.cluster.x-k8s.io/capz-e2e-r17mb8 created azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-r17mb8 created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-r17mb8-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-r17mb8-control-plane created machinepool.exp.cluster.x-k8s.io/capz-e2e-r17mb8-mp-0 created azuremachinepool.exp.infrastructure.cluster.x-k8s.io/capz-e2e-r17mb8-mp-0 created kubeadmconfig.bootstrap.cluster.x-k8s.io/capz-e2e-r17mb8-mp-0 created configmap/cni-capz-e2e-r17mb8-crs-0 created clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-r17mb8-crs-0 created INFO: Waiting for the cluster infrastructure to be provisioned �[1mSTEP�[0m: Waiting for cluster to enter the provisioned phase INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by create-workload-cluster-jf9wjo/capz-e2e-r17mb8-control-plane to be provisioned �[1mSTEP�[0m: Waiting for one control plane node to exist INFO: Waiting for control plane to be ready INFO: Waiting for control plane create-workload-cluster-jf9wjo/capz-e2e-r17mb8-control-plane to be ready (implies underlying nodes to be ready as well) �[1mSTEP�[0m: Waiting for the control plane to be ready INFO: Waiting for the machine deployments to be provisioned INFO: Waiting for the machine pools to be provisioned �[1mSTEP�[0m: Waiting for the machine pool workload nodes to exist �[1mSTEP�[0m: Dumping logs from the "capz-e2e-u5kq1f" workload cluster �[1mSTEP�[0m: Dumping workload cluster create-workload-cluster-fmvse4/capz-e2e-u5kq1f logs �[1mSTEP�[0m: Dumping workload cluster create-workload-cluster-fmvse4/capz-e2e-u5kq1f kube-system pod logs �[1mSTEP�[0m: Redacting sensitive information from logs
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sipv6\scontrol\-plane\scluster\sWith\sipv6\sworker\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:218
Expected success, but got an error:
    <*errors.withStack | 0xc0006ec760>: { error: [ { error: { cause: { Op: "dial", Net: "tcp", Source: nil, Addr: {IP: "4\xb9ӯ", Port: 22, Zone: ""}, Err: {Syscall: "connect", Err: 0x6e}, }, msg: "dialing public load balancer at capz-e2e-u5kq1f-ecd4a09.southcentralus.cloudapp.azure.com", }, stack: [0x1a1cdf2, 0x1a23ec5, 0x16cf993, 0x472401], }, { error: { cause: { Op: "dial", Net: "tcp", Source: nil, Addr: {IP: "4\xb9ӯ", Port: 22, Zone: ""}, Err: {Syscall: "connect", Err: 0x6e}, }, msg: "dialing public load balancer at capz-e2e-u5kq1f-ecd4a09.southcentralus.cloudapp.azure.com", }, stack: [0x1a1cdf2, 0x1a23ec5, 0x16cf993, 0x472401], }, ], stack: [0x16cf461, 0x16cf3f1, 0x16cf87e, 0x1a16c6d, 0x1a28a4c, 0x89b4e3, 0x8a93ca, 0x1a2916f, 0x883863, 0x883477, 0x882887, 0x8899d1, 0x889112, 0x898831, 0x898347, 0x897b57, 0x89a326, 0x8a8b18, 0x8a884d, 0x1a1fc17, 0x52b3cf, 0x472401], }
dialing public load balancer at capz-e2e-u5kq1f-ecd4a09.southcentralus.cloudapp.azure.com: dial tcp 52.185.211.175:22: connect: connection timed out
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_timesync.go:88
from junit.e2e_suite.2.xml
INFO: "With ipv6 worker node" started at Tue, 02 Mar 2021 18:35:16 UTC on Ginkgo node 2 of 3 �[1mSTEP�[0m: Creating a namespace for hosting the "create-workload-cluster" test spec INFO: Creating namespace create-workload-cluster-fmvse4 INFO: Creating event watcher for namespace "create-workload-cluster-fmvse4" INFO: Creating the workload cluster with name "capz-e2e-u5kq1f" using the "ipv6" template (Kubernetes v1.19.7, 3 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-u5kq1f --infrastructure (default) --kubernetes-version v1.19.7 --control-plane-machine-count 3 --worker-machine-count 1 --flavor ipv6 INFO: Applying the cluster template yaml to the cluster cluster.cluster.x-k8s.io/capz-e2e-u5kq1f created azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-u5kq1f created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-u5kq1f-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-u5kq1f-control-plane created machinedeployment.cluster.x-k8s.io/capz-e2e-u5kq1f-md-0 created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-u5kq1f-md-0 created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-u5kq1f-md-0 created configmap/cni-capz-e2e-u5kq1f-crs-0 created clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-u5kq1f-crs-0 created INFO: Waiting for the cluster infrastructure to be provisioned �[1mSTEP�[0m: Waiting for cluster to enter the provisioned phase INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by create-workload-cluster-fmvse4/capz-e2e-u5kq1f-control-plane to be provisioned �[1mSTEP�[0m: Waiting for one control plane node to exist INFO: Waiting for control plane to be ready INFO: Waiting for the remaining control plane machines managed by create-workload-cluster-fmvse4/capz-e2e-u5kq1f-control-plane to be provisioned �[1mSTEP�[0m: Waiting for all control plane nodes to exist INFO: Waiting for control plane create-workload-cluster-fmvse4/capz-e2e-u5kq1f-control-plane to be ready (implies underlying nodes to be ready as well) �[1mSTEP�[0m: Waiting for the control plane to be ready INFO: Waiting for the machine deployments to be provisioned �[1mSTEP�[0m: Waiting for the workload nodes to exist INFO: Waiting for the machine pools to be provisioned �[1mSTEP�[0m: checking that time synchronization is healthy on capz-e2e-u5kq1f-control-plane-28wz9 �[1mSTEP�[0m: checking that time synchronization is healthy on capz-e2e-u5kq1f-control-plane-srxdl �[1mSTEP�[0m: checking that time synchronization is healthy on capz-e2e-u5kq1f-control-plane-szjrn �[1mSTEP�[0m: checking that time synchronization is healthy on capz-e2e-u5kq1f-md-0-4gz7t �[1mSTEP�[0m: Dumping logs from the "capz-e2e-u5kq1f" workload cluster �[1mSTEP�[0m: Dumping workload cluster create-workload-cluster-fmvse4/capz-e2e-u5kq1f logs Failed to get logs for machine capz-e2e-u5kq1f-control-plane-4nwxz, cluster create-workload-cluster-fmvse4/capz-e2e-u5kq1f: dialing public load balancer at capz-e2e-u5kq1f-ecd4a09.southcentralus.cloudapp.azure.com: dial tcp 52.185.211.175:22: connect: connection timed out �[1mSTEP�[0m: Dumping workload cluster create-workload-cluster-fmvse4/capz-e2e-u5kq1f kube-system pod logs �[1mSTEP�[0m: Fetching kube-system pod logs took 415.155136ms �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-u5kq1f-control-plane-srxdl, container kube-scheduler �[1mSTEP�[0m: Creating log watcher for 
controller kube-system/kube-apiserver-capz-e2e-u5kq1f-control-plane-28wz9, container kube-apiserver �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-kube-controllers-8f59968d4-d4fsb, container calico-kube-controllers �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-node-67wxx, container calico-node �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-node-qkkb7, container calico-node �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-node-tcr5g, container calico-node �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-node-tr74d, container calico-node �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-f9fd979d6-7skhc, container coredns �[1mSTEP�[0m: Creating log watcher for controller kube-system/etcd-capz-e2e-u5kq1f-control-plane-srxdl, container etcd �[1mSTEP�[0m: Creating log watcher for controller kube-system/etcd-capz-e2e-u5kq1f-control-plane-szjrn, container etcd �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-7v8zr, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-u5kq1f-control-plane-srxdl, container kube-apiserver �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-u5kq1f-control-plane-szjrn, container kube-apiserver �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-u5kq1f-control-plane-28wz9, container kube-controller-manager �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-u5kq1f-control-plane-srxdl, container kube-controller-manager �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-u5kq1f-control-plane-szjrn, container kube-controller-manager �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-f9fd979d6-s9wn4, container coredns �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-m2qdn, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-fnfdc, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-gtvpr, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-u5kq1f-control-plane-28wz9, container kube-scheduler �[1mSTEP�[0m: Dumping workload cluster create-workload-cluster-fmvse4/capz-e2e-u5kq1f Azure activity log �[1mSTEP�[0m: Creating log watcher for controller kube-system/etcd-capz-e2e-u5kq1f-control-plane-28wz9, container etcd �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-u5kq1f-control-plane-szjrn, container kube-scheduler �[1mSTEP�[0m: Fetching activity logs took 899.184656ms �[1mSTEP�[0m: Dumping all the Cluster API resources in the "create-workload-cluster-fmvse4" namespace �[1mSTEP�[0m: Deleting all clusters in the create-workload-cluster-fmvse4 namespace �[1mSTEP�[0m: Deleting cluster capz-e2e-u5kq1f INFO: Waiting for the Cluster create-workload-cluster-fmvse4/capz-e2e-u5kq1f to be deleted �[1mSTEP�[0m: Waiting for cluster capz-e2e-u5kq1f to be deleted �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/calico-node-qkkb7, container calico-node: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-u5kq1f-control-plane-srxdl, container kube-apiserver: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for 
pod kube-system/kube-proxy-m2qdn, container kube-proxy: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-u5kq1f-control-plane-szjrn, container kube-apiserver: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-u5kq1f-control-plane-szjrn, container etcd: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-u5kq1f-control-plane-srxdl, container etcd: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/calico-kube-controllers-8f59968d4-d4fsb, container calico-kube-controllers: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-7skhc, container coredns: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/calico-node-tcr5g, container calico-node: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-u5kq1f-control-plane-28wz9, container kube-controller-manager: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/calico-node-67wxx, container calico-node: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-u5kq1f-control-plane-28wz9, container etcd: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-u5kq1f-control-plane-srxdl, container kube-controller-manager: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-proxy-fnfdc, container kube-proxy: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-proxy-7v8zr, container kube-proxy: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-u5kq1f-control-plane-szjrn, container kube-scheduler: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/calico-node-tr74d, container calico-node: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-u5kq1f-control-plane-28wz9, container kube-apiserver: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-proxy-gtvpr, container kube-proxy: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-s9wn4, container coredns: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-u5kq1f-control-plane-srxdl, container kube-scheduler: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-u5kq1f-control-plane-28wz9, container kube-scheduler: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-u5kq1f-control-plane-szjrn, container kube-controller-manager: http2: client connection lost �[1mSTEP�[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace create-workload-cluster-fmvse4 �[1mSTEP�[0m: Redacting sensitive information from logs INFO: "With ipv6 worker node" ran for 23m18s on Ginkgo node 2 of 3
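The root cause reported above is a plain TCP connect timeout on port 22 of the workload cluster's public load balancer, which the e2e suite dials to reach nodes over SSH for the time-sync check and log collection. The following is a minimal, self-contained sketch of that connectivity check; the address is copied from the log, and the 60-second timeout is an assumption for illustration rather than the suite's actual value.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the failure log; port 22 is SSH exposed through the public LB.
	addr := "capz-e2e-u5kq1f-ecd4a09.southcentralus.cloudapp.azure.com:22"

	// The 60s timeout is an assumption for illustration.
	conn, err := net.DialTimeout("tcp", addr, 60*time.Second)
	if err != nil {
		// This is the path the e2e run hit:
		// "dial tcp 52.185.211.175:22: connect: connection timed out".
		fmt.Printf("dialing public load balancer at %s: %v\n", addr, err)
		return
	}
	defer conn.Close()
	fmt.Println("port 22 is reachable through the load balancer")
}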
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sWith\s3\scontrol\-plane\snodes\sand\s2\sworker\snodes$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:143
Expected success, but got an error:
    <*errors.withStack | 0xc00094bc60>: { error: [ { error: { cause: { Op: "dial", Net: "tcp", Source: nil, Addr: {IP: "4\xb9\xd3Z", Port: 22, Zone: ""}, Err: {Syscall: "connect", Err: 0x6e}, }, msg: "dialing public load balancer at capz-e2e-9rn1py-4b7f31.southcentralus.cloudapp.azure.com", }, stack: [0x1a1cdf2, 0x1a23ec5, 0x16cf993, 0x472401], }, { error: { cause: { Op: "dial", Net: "tcp", Source: nil, Addr: {IP: "4\xb9\xd3Z", Port: 22, Zone: ""}, Err: {Syscall: "connect", Err: 0x6e}, }, msg: "dialing public load balancer at capz-e2e-9rn1py-4b7f31.southcentralus.cloudapp.azure.com", }, stack: [0x1a1cdf2, 0x1a23ec5, 0x16cf993, 0x472401], }, { error: { cause: { Op: "dial", Net: "tcp", Source: nil, Addr: {IP: "4\xb9\xd3Z", Port: 22, Zone: ""}, Err: {Syscall: "connect", Err: 0x6e}, }, msg: "dialing public load balancer at capz-e2e-9rn1py-4b7f31.southcentralus.cloudapp.azure.com", }, stack: [0x1a1cdf2, 0x1a23ec5, 0x16cf993, 0x472401], }, ], stack: [0x16cf461, 0x16cf3f1, 0x16cf87e, 0x1a16c6d, 0x1a27c0c, 0x89b4e3, 0x8a93ca, 0x1a2860f, 0x883863, 0x883477, 0x882887, 0x8899d1, 0x889112, 0x898831, 0x898347, 0x897b57, 0x89a326, 0x8a8b18, 0x8a884d, 0x1a1fc17, 0x52b3cf, 0x472401], }
dialing public load balancer at capz-e2e-9rn1py-4b7f31.southcentralus.cloudapp.azure.com: dial tcp 52.185.211.90:22: connect: connection timed out
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_timesync.go:88
from junit.e2e_suite.1.xml
INFO: "With 3 control-plane nodes and 2 worker nodes" started at Tue, 02 Mar 2021 18:35:16 UTC on Ginkgo node 1 of 3 �[1mSTEP�[0m: Creating a namespace for hosting the "create-workload-cluster" test spec INFO: Creating namespace create-workload-cluster-kkcpxz INFO: Creating event watcher for namespace "create-workload-cluster-kkcpxz" INFO: Creating the workload cluster with name "capz-e2e-9rn1py" using the "(default)" template (Kubernetes v1.19.7, 3 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-9rn1py --infrastructure (default) --kubernetes-version v1.19.7 --control-plane-machine-count 3 --worker-machine-count 2 --flavor (default) INFO: Applying the cluster template yaml to the cluster cluster.cluster.x-k8s.io/capz-e2e-9rn1py created azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-9rn1py created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-9rn1py-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-9rn1py-control-plane created machinedeployment.cluster.x-k8s.io/capz-e2e-9rn1py-md-0 created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-9rn1py-md-0 created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-9rn1py-md-0 created machinehealthcheck.cluster.x-k8s.io/capz-e2e-9rn1py-mhc-0 created configmap/cni-capz-e2e-9rn1py-crs-0 created clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-9rn1py-crs-0 created INFO: Waiting for the cluster infrastructure to be provisioned �[1mSTEP�[0m: Waiting for cluster to enter the provisioned phase INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by create-workload-cluster-kkcpxz/capz-e2e-9rn1py-control-plane to be provisioned �[1mSTEP�[0m: Waiting for one control plane node to exist INFO: Waiting for control plane to be ready INFO: Waiting for the remaining control plane machines managed by create-workload-cluster-kkcpxz/capz-e2e-9rn1py-control-plane to be provisioned �[1mSTEP�[0m: Waiting for all control plane nodes to exist INFO: Waiting for control plane create-workload-cluster-kkcpxz/capz-e2e-9rn1py-control-plane to be ready (implies underlying nodes to be ready as well) �[1mSTEP�[0m: Waiting for the control plane to be ready INFO: Waiting for the machine deployments to be provisioned �[1mSTEP�[0m: Waiting for the workload nodes to exist INFO: Waiting for the machine pools to be provisioned �[1mSTEP�[0m: checking that time synchronization is healthy on capz-e2e-9rn1py-control-plane-hvr7d �[1mSTEP�[0m: checking that time synchronization is healthy on capz-e2e-9rn1py-control-plane-mx7gt �[1mSTEP�[0m: checking that time synchronization is healthy on capz-e2e-9rn1py-control-plane-8q4zt �[1mSTEP�[0m: checking that time synchronization is healthy on capz-e2e-9rn1py-md-0-fvgq4 �[1mSTEP�[0m: checking that time synchronization is healthy on capz-e2e-9rn1py-md-0-zb9p7 �[1mSTEP�[0m: Dumping logs from the "capz-e2e-9rn1py" workload cluster �[1mSTEP�[0m: Dumping workload cluster create-workload-cluster-kkcpxz/capz-e2e-9rn1py logs Failed to get logs for machine capz-e2e-9rn1py-control-plane-6hp4c, cluster create-workload-cluster-kkcpxz/capz-e2e-9rn1py: dialing public load balancer at capz-e2e-9rn1py-4b7f31.southcentralus.cloudapp.azure.com: dial tcp 52.185.211.90:22: connect: connection timed out Failed to get logs for machine capz-e2e-9rn1py-control-plane-dmjzf, cluster create-workload-cluster-kkcpxz/capz-e2e-9rn1py: dialing public load balancer at 
capz-e2e-9rn1py-4b7f31.southcentralus.cloudapp.azure.com: dial tcp 52.185.211.90:22: connect: connection timed out Failed to get logs for machine capz-e2e-9rn1py-control-plane-r4gmm, cluster create-workload-cluster-kkcpxz/capz-e2e-9rn1py: dialing public load balancer at capz-e2e-9rn1py-4b7f31.southcentralus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.196.46:43046->52.185.211.90:22: read: connection timed out �[1mSTEP�[0m: Dumping workload cluster create-workload-cluster-kkcpxz/capz-e2e-9rn1py kube-system pod logs �[1mSTEP�[0m: Fetching kube-system pod logs took 416.261308ms �[1mSTEP�[0m: Dumping workload cluster create-workload-cluster-kkcpxz/capz-e2e-9rn1py Azure activity log �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-9rn1py-control-plane-8q4zt, container kube-apiserver �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-9rn1py-control-plane-hvr7d, container kube-apiserver �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-node-pg9lb, container calico-node �[1mSTEP�[0m: Creating log watcher for controller kube-system/etcd-capz-e2e-9rn1py-control-plane-mx7gt, container etcd �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-9rn1py-control-plane-mx7gt, container kube-apiserver �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-jr4sk, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-node-6z5g6, container calico-node �[1mSTEP�[0m: Creating log watcher for controller kube-system/etcd-capz-e2e-9rn1py-control-plane-8q4zt, container etcd �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-9rn1py-control-plane-8q4zt, container kube-controller-manager �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-r92tl, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-node-9rg46, container calico-node �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-f9fd979d6-7wnqt, container coredns �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-kube-controllers-8f59968d4-q7hgp, container calico-kube-controllers �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-f9fd979d6-sj2jv, container coredns �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-node-j2tbn, container calico-node �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-node-4bwjg, container calico-node �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-9rn1py-control-plane-8q4zt, container kube-scheduler �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-9rn1py-control-plane-hvr7d, container kube-scheduler �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-b5ctf, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-sc8qn, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-9rn1py-control-plane-hvr7d, container kube-controller-manager �[1mSTEP�[0m: Creating log watcher for controller kube-system/etcd-capz-e2e-9rn1py-control-plane-hvr7d, container etcd �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-9rn1py-control-plane-mx7gt, container kube-controller-manager �[1mSTEP�[0m: Creating log watcher for controller 
kube-system/kube-scheduler-capz-e2e-9rn1py-control-plane-mx7gt, container kube-scheduler �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-4tcc4, container kube-proxy �[1mSTEP�[0m: Got error while iterating over activity logs for resource group capz-e2e-9rn1py: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded �[1mSTEP�[0m: Fetching activity logs took 30.000875192s �[1mSTEP�[0m: Dumping all the Cluster API resources in the "create-workload-cluster-kkcpxz" namespace �[1mSTEP�[0m: Deleting all clusters in the create-workload-cluster-kkcpxz namespace �[1mSTEP�[0m: Deleting cluster capz-e2e-9rn1py INFO: Waiting for the Cluster create-workload-cluster-kkcpxz/capz-e2e-9rn1py to be deleted �[1mSTEP�[0m: Waiting for cluster capz-e2e-9rn1py to be deleted �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/calico-node-9rg46, container calico-node: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-9rn1py-control-plane-mx7gt, container kube-controller-manager: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-9rn1py-control-plane-hvr7d, container kube-controller-manager: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/calico-node-4bwjg, container calico-node: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-9rn1py-control-plane-mx7gt, container kube-apiserver: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/calico-node-pg9lb, container calico-node: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/calico-node-j2tbn, container calico-node: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-proxy-b5ctf, container kube-proxy: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-9rn1py-control-plane-mx7gt, container etcd: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-9rn1py-control-plane-8q4zt, container kube-controller-manager: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/calico-node-6z5g6, container calico-node: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-9rn1py-control-plane-hvr7d, container etcd: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-sj2jv, container coredns: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/calico-kube-controllers-8f59968d4-q7hgp, container calico-kube-controllers: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-proxy-r92tl, container kube-proxy: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-9rn1py-control-plane-hvr7d, container kube-apiserver: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-9rn1py-control-plane-mx7gt, container kube-scheduler: http2: client connection lost �[1mSTEP�[0m: Got 
error while streaming logs for pod kube-system/kube-proxy-sc8qn, container kube-proxy: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-proxy-4tcc4, container kube-proxy: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-9rn1py-control-plane-8q4zt, container kube-scheduler: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/etcd-capz-e2e-9rn1py-control-plane-8q4zt, container etcd: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-proxy-jr4sk, container kube-proxy: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-9rn1py-control-plane-hvr7d, container kube-scheduler: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-7wnqt, container coredns: http2: client connection lost �[1mSTEP�[0m: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-9rn1py-control-plane-8q4zt, container kube-apiserver: http2: client connection lost �[1mSTEP�[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace create-workload-cluster-kkcpxz �[1mSTEP�[0m: Redacting sensitive information from logs INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 44m36s on Ginkgo node 1 of 3
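This spec failed the same way as the ipv6 one, except that one machine (capz-e2e-9rn1py-control-plane-r4gmm) got past the TCP connect and timed out during the SSH handshake instead. The sketch below shows where that second stage happens when dialing through the load balancer with golang.org/x/crypto/ssh; the user name, key path, and timeout are placeholders, not the values the e2e suite actually uses.

package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder key path for illustration; the e2e suite supplies its own key pair.
	key, err := os.ReadFile(os.Getenv("HOME") + "/.ssh/id_rsa")
	if err != nil {
		log.Fatalf("reading private key: %v", err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatalf("parsing private key: %v", err)
	}

	config := &ssh.ClientConfig{
		User:            "capi", // placeholder user name
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway e2e cluster, not for production
		Timeout:         60 * time.Second,            // placeholder timeout
	}

	// ssh.Dial first opens the TCP connection, then runs the SSH handshake.
	// A reachable load balancer fronting an unhealthy backend yields errors like
	// "ssh: handshake failed: read tcp ...->...:22: read: connection timed out".
	addr := "capz-e2e-9rn1py-4b7f31.southcentralus.cloudapp.azure.com:22"
	client, err := ssh.Dial("tcp", addr, config)
	if err != nil {
		log.Fatalf("dialing %s: %v", addr, err)
	}
	defer client.Close()
	fmt.Println("SSH handshake succeeded through the public load balancer")
}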
capz-e2e Workload cluster creation Creating a cluster using a different SP identity with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a private cluster Creates a public management cluster in the same vnet
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd in a HA cluster
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd in a single control plane cluster
capz-e2e Running the Cluster API E2E tests Running the MachineDeployment upgrade spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should adopt up-to-date control plane Machines without modification Should adopt up-to-date control plane Machines without modification
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Workload cluster creation Creating a GPU-enabled cluster with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a Windows Enabled cluster With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
capz-e2e Workload cluster creation Creating a Windows enabled VMSS cluster with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node