Result: FAILURE
Tests: 1 failed / 17 succeeded
Started: 2022-08-17 13:18
Elapsed: 56m9s
Revision: main

Test Failures


capi-e2e When testing ClusterClass changes [ClusterClass] Should successfully rollout the managed topology upon changes to the ClusterClass 11m53s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\stesting\sClusterClass\schanges\s\[ClusterClass\]\sShould\ssuccessfully\srollout\sthe\smanaged\stopology\supon\schanges\sto\sthe\sClusterClass$'
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterclass_changes.go:119
Timed out after 600.007s.
Expected
    <*errors.fundamental | 0xc00091d338>: {
        msg: "field \"spec.machineTemplate.nodeDrainTimeout\" should be equal to \"10s\", but is \"30s\"",
        stack: [0x1be4da5, 0x4d7245, 0x4d67bf, 0x7df971, 0x7dfcd9, 0x7e0507, 0x7dfa6b, 0x1be4ad2, 0x1be35a6, 0x7a9cd1, 0x7a96c5, 0x7a8dbb, 0x7aeaaa, 0x7ae4a7, 0x7ceea8, 0x7cebc5, 0x7ce265, 0x7d0632, 0x7dc529, 0x7dc336, 0x1bffb8d, 0x5204a2, 0x46d941],
    }
to be nil
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterclass_changes.go:240
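
The assertion above polls the rolled-out control plane object until the templated field matches the value set in the modified ClusterClass; after the 600s timeout it still observed the old value (`30s` instead of `10s`). A minimal Go sketch of that comparison, using a hypothetical helper rather than the actual cluster-api test code:

```go
package main

import "fmt"

// checkFieldEqual mirrors the shape of the failing check: the e2e test
// repeatedly compares the observed field value against the value from the
// updated ClusterClass template and returns an error while they differ.
// This is a simplified sketch, not the real clusterclass_changes.go code.
func checkFieldEqual(path, want, got string) error {
	if want != got {
		return fmt.Errorf("field %q should be equal to %q, but is %q", path, want, got)
	}
	return nil
}

func main() {
	// Reproduces the error message seen in the failure above.
	err := checkFieldEqual("spec.machineTemplate.nodeDrainTimeout", "10s", "30s")
	fmt.Println(err)
}
```

In the real test this comparison is wrapped in a Gomega `Eventually`, so the failure means the rollout never propagated the new value within the timeout, not that a single read raced the reconciler.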
Full stdout/stderr in junit.e2e_suite.3.xml




Error lines from build-log.txt

... skipping 1130 lines ...
INFO: Waiting for correct number of replicas to exist
STEP: Scaling the MachineDeployment down to 1
INFO: Scaling machine deployment md-scale-kvirv1/md-scale-znpr6o-md-0 from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED!
STEP: Dumping logs from the "md-scale-znpr6o" workload cluster
Failed to get logs for Machine md-scale-znpr6o-control-plane-qxr9r, Cluster md-scale-kvirv1/md-scale-znpr6o: exit status 2
Failed to get logs for Machine md-scale-znpr6o-md-0-68887766b5-77kgl, Cluster md-scale-kvirv1/md-scale-znpr6o: exit status 2
STEP: Dumping all the Cluster API resources in the "md-scale-kvirv1" namespace
STEP: Deleting cluster md-scale-kvirv1/md-scale-znpr6o
STEP: Deleting cluster md-scale-znpr6o
INFO: Waiting for the Cluster md-scale-kvirv1/md-scale-znpr6o to be deleted
STEP: Waiting for cluster md-scale-znpr6o to be deleted
STEP: Deleting namespace used for hosting the "md-scale" test spec
... skipping 81 lines ...
STEP: Ensure API servers are stable before doing move
STEP: Moving the cluster back to bootstrap
STEP: Moving workload clusters
INFO: Waiting for the cluster to be reconciled after moving back to bootstrap
STEP: Waiting for cluster to enter the provisioned phase
STEP: Dumping logs from the "self-hosted-kfps95" workload cluster
Failed to get logs for Machine self-hosted-kfps95-control-plane-mpkmx, Cluster self-hosted-mni9m1/self-hosted-kfps95: exit status 2
Failed to get logs for Machine self-hosted-kfps95-md-0-5789959cfb-zd8k9, Cluster self-hosted-mni9m1/self-hosted-kfps95: exit status 2
STEP: Dumping all the Cluster API resources in the "self-hosted-mni9m1" namespace
STEP: Deleting cluster self-hosted-mni9m1/self-hosted-kfps95
STEP: Deleting cluster self-hosted-kfps95
INFO: Waiting for the Cluster self-hosted-mni9m1/self-hosted-kfps95 to be deleted
STEP: Waiting for cluster self-hosted-kfps95 to be deleted
STEP: Deleting namespace used for hosting the "self-hosted" test spec
... skipping 52 lines ...
STEP: Waiting for deployment node-drain-e8pxnr-unevictable-workload/unevictable-pod-45e to be available
STEP: Scale down the controlplane of the workload cluster and make sure that nodes running workload can be deleted even the draining process is blocked.
INFO: Scaling controlplane node-drain-e8pxnr/node-drain-av7jvk-control-plane from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED!
STEP: Dumping logs from the "node-drain-av7jvk" workload cluster
Failed to get logs for Machine node-drain-av7jvk-control-plane-vn4w4, Cluster node-drain-e8pxnr/node-drain-av7jvk: exit status 2
STEP: Dumping all the Cluster API resources in the "node-drain-e8pxnr" namespace
STEP: Deleting cluster node-drain-e8pxnr/node-drain-av7jvk
STEP: Deleting cluster node-drain-av7jvk
INFO: Waiting for the Cluster node-drain-e8pxnr/node-drain-av7jvk to be deleted
STEP: Waiting for cluster node-drain-av7jvk to be deleted
STEP: Deleting namespace used for hosting the "node-drain" test spec
... skipping 81 lines ...
STEP: Ensure API servers are stable before doing move
STEP: Moving the cluster back to bootstrap
STEP: Moving workload clusters
INFO: Waiting for the cluster to be reconciled after moving back to bootstrap
STEP: Waiting for cluster to enter the provisioned phase
STEP: Dumping logs from the "self-hosted-dq8r5i" workload cluster
Failed to get logs for Machine self-hosted-dq8r5i-5vnfh-hmqc2, Cluster self-hosted-7b4xbq/self-hosted-dq8r5i: exit status 2
Failed to get logs for Machine self-hosted-dq8r5i-md-0-b9mzp-5bbc85c5cb-fmpst, Cluster self-hosted-7b4xbq/self-hosted-dq8r5i: exit status 2
STEP: Dumping all the Cluster API resources in the "self-hosted-7b4xbq" namespace
STEP: Deleting cluster self-hosted-7b4xbq/self-hosted-dq8r5i
STEP: Deleting cluster self-hosted-dq8r5i
INFO: Waiting for the Cluster self-hosted-7b4xbq/self-hosted-dq8r5i to be deleted
STEP: Waiting for cluster self-hosted-dq8r5i to be deleted
STEP: Deleting namespace used for hosting the "self-hosted" test spec
... skipping 52 lines ...
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-rjmsld/k8s-upgrade-and-conformance-1h47e9-md-0-8k8m8 to be upgraded to v1.24.0
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.24.0
STEP: Waiting until nodes are ready
STEP: PASSED!
STEP: Dumping logs from the "k8s-upgrade-and-conformance-1h47e9" workload cluster
Failed to get logs for Machine k8s-upgrade-and-conformance-1h47e9-45d5z-fzhlh, Cluster k8s-upgrade-and-conformance-rjmsld/k8s-upgrade-and-conformance-1h47e9: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-1h47e9-md-0-8k8m8-c9b7fdcc4-sjkjn, Cluster k8s-upgrade-and-conformance-rjmsld/k8s-upgrade-and-conformance-1h47e9: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-1h47e9-md-0-8k8m8-c9b7fdcc4-v2xt5, Cluster k8s-upgrade-and-conformance-rjmsld/k8s-upgrade-and-conformance-1h47e9: exit status 2
STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-rjmsld" namespace
STEP: Deleting cluster k8s-upgrade-and-conformance-rjmsld/k8s-upgrade-and-conformance-1h47e9
STEP: Deleting cluster k8s-upgrade-and-conformance-1h47e9
INFO: Waiting for the Cluster k8s-upgrade-and-conformance-rjmsld/k8s-upgrade-and-conformance-1h47e9 to be deleted
STEP: Waiting for cluster k8s-upgrade-and-conformance-1h47e9 to be deleted
STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec
... skipping 48 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED!
STEP: Dumping logs from the "mhc-remediation-4heu0a" workload cluster
Failed to get logs for Machine mhc-remediation-4heu0a-control-plane-t5nlm, Cluster mhc-remediation-imkm1k/mhc-remediation-4heu0a: exit status 2
Failed to get logs for Machine mhc-remediation-4heu0a-md-0-c4f7ccdb4-nkgj2, Cluster mhc-remediation-imkm1k/mhc-remediation-4heu0a: exit status 2
STEP: Dumping all the Cluster API resources in the "mhc-remediation-imkm1k" namespace
STEP: Deleting cluster mhc-remediation-imkm1k/mhc-remediation-4heu0a
STEP: Deleting cluster mhc-remediation-4heu0a
INFO: Waiting for the Cluster mhc-remediation-imkm1k/mhc-remediation-4heu0a to be deleted
STEP: Waiting for cluster mhc-remediation-4heu0a to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 54 lines ...
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-ryprfg/k8s-upgrade-and-conformance-0q6y26-md-0-2mqvf to be upgraded to v1.24.0
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.24.0
STEP: Waiting until nodes are ready
STEP: PASSED!
STEP: Dumping logs from the "k8s-upgrade-and-conformance-0q6y26" workload cluster
Failed to get logs for Machine k8s-upgrade-and-conformance-0q6y26-4xzmm-whzbs, Cluster k8s-upgrade-and-conformance-ryprfg/k8s-upgrade-and-conformance-0q6y26: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-0q6y26-4xzmm-xd9ql, Cluster k8s-upgrade-and-conformance-ryprfg/k8s-upgrade-and-conformance-0q6y26: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-0q6y26-4xzmm-z8wrz, Cluster k8s-upgrade-and-conformance-ryprfg/k8s-upgrade-and-conformance-0q6y26: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-0q6y26-md-0-2mqvf-785c585f59-4f2tx, Cluster k8s-upgrade-and-conformance-ryprfg/k8s-upgrade-and-conformance-0q6y26: exit status 2
STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-ryprfg" namespace
STEP: Deleting cluster k8s-upgrade-and-conformance-ryprfg/k8s-upgrade-and-conformance-0q6y26
STEP: Deleting cluster k8s-upgrade-and-conformance-0q6y26
INFO: Waiting for the Cluster k8s-upgrade-and-conformance-ryprfg/k8s-upgrade-and-conformance-0q6y26 to be deleted
STEP: Waiting for cluster k8s-upgrade-and-conformance-0q6y26 to be deleted
STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec
... skipping 40 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-eblyfk-md-0 are in the "fd4" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-eblyfk" workload cluster
Failed to get logs for Machine quick-start-eblyfk-control-plane-48d65, Cluster quick-start-ps3yfw/quick-start-eblyfk: exit status 2
Failed to get logs for Machine quick-start-eblyfk-md-0-7cb5f747f7-w95b2, Cluster quick-start-ps3yfw/quick-start-eblyfk: exit status 2
STEP: Dumping all the Cluster API resources in the "quick-start-ps3yfw" namespace
STEP: Deleting cluster quick-start-ps3yfw/quick-start-eblyfk
STEP: Deleting cluster quick-start-eblyfk
INFO: Waiting for the Cluster quick-start-ps3yfw/quick-start-eblyfk to be deleted
STEP: Waiting for cluster quick-start-eblyfk to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 50 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED!
STEP: Dumping logs from the "mhc-remediation-0dlpzr" workload cluster
Failed to get logs for Machine mhc-remediation-0dlpzr-control-plane-blrhp, Cluster mhc-remediation-2fd1ri/mhc-remediation-0dlpzr: exit status 2
Failed to get logs for Machine mhc-remediation-0dlpzr-control-plane-dl24w, Cluster mhc-remediation-2fd1ri/mhc-remediation-0dlpzr: exit status 2
Failed to get logs for Machine mhc-remediation-0dlpzr-control-plane-grj7g, Cluster mhc-remediation-2fd1ri/mhc-remediation-0dlpzr: exit status 2
Failed to get logs for Machine mhc-remediation-0dlpzr-md-0-6d7f6b9dcd-twgp4, Cluster mhc-remediation-2fd1ri/mhc-remediation-0dlpzr: exit status 2
STEP: Dumping all the Cluster API resources in the "mhc-remediation-2fd1ri" namespace
STEP: Deleting cluster mhc-remediation-2fd1ri/mhc-remediation-0dlpzr
STEP: Deleting cluster mhc-remediation-0dlpzr
INFO: Waiting for the Cluster mhc-remediation-2fd1ri/mhc-remediation-0dlpzr to be deleted
STEP: Waiting for cluster mhc-remediation-0dlpzr to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 48 lines ...
STEP: Waiting for the machine pool workload nodes
STEP: Scaling the machine pool to zero
INFO: Patching the replica count in Machine Pool machine-pool-szq2dg/machine-pool-asv34f-mp-0
STEP: Waiting for the machine pool workload nodes
STEP: PASSED!
STEP: Dumping logs from the "machine-pool-asv34f" workload cluster
Failed to get logs for Machine machine-pool-asv34f-control-plane-c29zk, Cluster machine-pool-szq2dg/machine-pool-asv34f: exit status 2
STEP: Dumping all the Cluster API resources in the "machine-pool-szq2dg" namespace
STEP: Deleting cluster machine-pool-szq2dg/machine-pool-asv34f
STEP: Deleting cluster machine-pool-asv34f
INFO: Waiting for the Cluster machine-pool-szq2dg/machine-pool-asv34f to be deleted
STEP: Waiting for cluster machine-pool-asv34f to be deleted
STEP: Deleting namespace used for hosting the "machine-pool" test spec
... skipping 46 lines ...
INFO: Waiting for rolling upgrade to start.
INFO: Waiting for MachineDeployment rolling upgrade to start
INFO: Waiting for rolling upgrade to complete.
INFO: Waiting for MachineDeployment rolling upgrade to complete
STEP: PASSED!
STEP: Dumping logs from the "md-rollout-eaeaso" workload cluster
Failed to get logs for Machine md-rollout-eaeaso-control-plane-hhvtv, Cluster md-rollout-iy93rm/md-rollout-eaeaso: exit status 2
Failed to get logs for Machine md-rollout-eaeaso-md-0-77dcc554cc-9xjj9, Cluster md-rollout-iy93rm/md-rollout-eaeaso: exit status 2
STEP: Dumping all the Cluster API resources in the "md-rollout-iy93rm" namespace
STEP: Deleting cluster md-rollout-iy93rm/md-rollout-eaeaso
STEP: Deleting cluster md-rollout-eaeaso
INFO: Waiting for the Cluster md-rollout-iy93rm/md-rollout-eaeaso to be deleted
STEP: Waiting for cluster md-rollout-eaeaso to be deleted
STEP: Deleting namespace used for hosting the "md-rollout" test spec
... skipping 81 lines ...
INFO: Patching the new Kubernetes version to Machine Pool k8s-upgrade-with-runtimesdk-36w304/k8s-upgrade-with-runtimesdk-j7c61s-mp-0
INFO: Waiting for Kubernetes versions of machines in MachinePool k8s-upgrade-with-runtimesdk-36w304/k8s-upgrade-with-runtimesdk-j7c61s-mp-0 to be upgraded from v1.23.6 to v1.24.0
INFO: Ensuring all MachinePool Instances have upgraded kubernetes version v1.24.0
STEP: Waiting until nodes are ready
STEP: Dumping resources and deleting the workload cluster
STEP: Deleting the workload cluster
Failed to get logs for Machine k8s-upgrade-with-runtimesdk-j7c61s-md-0-w476x-58894f8659-m4mgb, Cluster k8s-upgrade-with-runtimesdk-36w304/k8s-upgrade-with-runtimesdk-j7c61s: exit status 2
Failed to get logs for Machine k8s-upgrade-with-runtimesdk-j7c61s-md-0-w476x-58894f8659-zrdm2, Cluster k8s-upgrade-with-runtimesdk-36w304/k8s-upgrade-with-runtimesdk-j7c61s: exit status 2
Failed to get logs for Machine k8s-upgrade-with-runtimesdk-j7c61s-sgkld-7jt4r, Cluster k8s-upgrade-with-runtimesdk-36w304/k8s-upgrade-with-runtimesdk-j7c61s: exit status 2
Failed to get logs for MachinePool k8s-upgrade-with-runtimesdk-j7c61s-mp-0, Cluster k8s-upgrade-with-runtimesdk-36w304/k8s-upgrade-with-runtimesdk-j7c61s: exit status 2
STEP: Deleting the workload cluster
STEP: Deleting cluster k8s-upgrade-with-runtimesdk-j7c61s
INFO: Blocking with BeforeClusterDelete hook
STEP: Setting BeforeClusterDelete response to Status:Success to unblock the reconciliation
STEP: Checking all lifecycle hooks have been called
STEP: PASSED!
... skipping 44 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-54wfos-md-0-89c8v are in the "fd4" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-54wfos" workload cluster
Failed to get logs for Machine quick-start-54wfos-66lsq-kxdbx, Cluster quick-start-un7ysr/quick-start-54wfos: exit status 2
Failed to get logs for Machine quick-start-54wfos-md-0-89c8v-5cf7554885-7mxjh, Cluster quick-start-un7ysr/quick-start-54wfos: exit status 2
STEP: Dumping all the Cluster API resources in the "quick-start-un7ysr" namespace
STEP: Deleting cluster quick-start-un7ysr/quick-start-54wfos
STEP: Deleting cluster quick-start-54wfos
INFO: Waiting for the Cluster quick-start-un7ysr/quick-start-54wfos to be deleted
STEP: Waiting for cluster quick-start-54wfos to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 37 lines ...

STEP: Waiting for the control plane to be ready
STEP: Taking stable ownership of the Machines
STEP: Taking ownership of the cluster's PKI material
STEP: PASSED!
STEP: Dumping logs from the "kcp-adoption-42b8ql" workload cluster
Failed to get logs for Machine kcp-adoption-42b8ql-control-plane-0, Cluster kcp-adoption-vem108/kcp-adoption-42b8ql: exit status 2
STEP: Dumping all the Cluster API resources in the "kcp-adoption-vem108" namespace
STEP: Deleting cluster kcp-adoption-vem108/kcp-adoption-42b8ql
STEP: Deleting cluster kcp-adoption-42b8ql
INFO: Waiting for the Cluster kcp-adoption-vem108/kcp-adoption-42b8ql to be deleted
STEP: Waiting for cluster kcp-adoption-42b8ql to be deleted
STEP: Deleting namespace used for hosting the "kcp-adoption" test spec
... skipping 54 lines ...
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-upvfc9/k8s-upgrade-and-conformance-5orpp0-md-0-vmp24 to be upgraded to v1.24.0
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.24.0
STEP: Waiting until nodes are ready
STEP: PASSED!
STEP: Dumping logs from the "k8s-upgrade-and-conformance-5orpp0" workload cluster
Failed to get logs for Machine k8s-upgrade-and-conformance-5orpp0-md-0-vmp24-6988bc576f-sq9dp, Cluster k8s-upgrade-and-conformance-upvfc9/k8s-upgrade-and-conformance-5orpp0: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-5orpp0-rgj86-b7f6v, Cluster k8s-upgrade-and-conformance-upvfc9/k8s-upgrade-and-conformance-5orpp0: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-5orpp0-rgj86-dsr2w, Cluster k8s-upgrade-and-conformance-upvfc9/k8s-upgrade-and-conformance-5orpp0: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-5orpp0-rgj86-gsgcs, Cluster k8s-upgrade-and-conformance-upvfc9/k8s-upgrade-and-conformance-5orpp0: exit status 2
STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-upvfc9" namespace
STEP: Deleting cluster k8s-upgrade-and-conformance-upvfc9/k8s-upgrade-and-conformance-5orpp0
STEP: Deleting cluster k8s-upgrade-and-conformance-5orpp0
INFO: Waiting for the Cluster k8s-upgrade-and-conformance-upvfc9/k8s-upgrade-and-conformance-5orpp0 to be deleted
STEP: Waiting for cluster k8s-upgrade-and-conformance-5orpp0 to be deleted
STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec
... skipping 40 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-w7z9ny-md-0 are in the "fd4" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-w7z9ny" workload cluster
Failed to get logs for Machine quick-start-w7z9ny-control-plane-xng8j, Cluster quick-start-u0u5m1/quick-start-w7z9ny: exit status 2
Failed to get logs for Machine quick-start-w7z9ny-md-0-696b49c6bd-h7fbf, Cluster quick-start-u0u5m1/quick-start-w7z9ny: exit status 2
STEP: Dumping all the Cluster API resources in the "quick-start-u0u5m1" namespace
STEP: Deleting cluster quick-start-u0u5m1/quick-start-w7z9ny
STEP: Deleting cluster quick-start-w7z9ny
INFO: Waiting for the Cluster quick-start-u0u5m1/quick-start-w7z9ny to be deleted
STEP: Waiting for cluster quick-start-w7z9ny to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 115 lines ...
STEP: Deleting cluster clusterctl-upgrade/clusterctl-upgrade-frxgw5
STEP: Deleting namespace clusterctl-upgrade used for hosting the "clusterctl-upgrade" test
INFO: Deleting namespace clusterctl-upgrade
STEP: Deleting providers
INFO: clusterctl delete --all
STEP: Dumping logs from the "clusterctl-upgrade-frxgw5" workload cluster
Failed to get logs for Machine clusterctl-upgrade-frxgw5-control-plane-5fhsz, Cluster clusterctl-upgrade-sc4pvj/clusterctl-upgrade-frxgw5: exit status 2
Failed to get logs for Machine clusterctl-upgrade-frxgw5-md-0-79c9fdf9d9-flg67, Cluster clusterctl-upgrade-sc4pvj/clusterctl-upgrade-frxgw5: exit status 2
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-sc4pvj" namespace
STEP: Deleting cluster clusterctl-upgrade-sc4pvj/clusterctl-upgrade-frxgw5
STEP: Deleting cluster clusterctl-upgrade-frxgw5
INFO: Waiting for the Cluster clusterctl-upgrade-sc4pvj/clusterctl-upgrade-frxgw5 to be deleted
STEP: Waiting for cluster clusterctl-upgrade-frxgw5 to be deleted
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
... skipping 42 lines ...
STEP: Checking all the machines controlled by clusterclass-changes-t1tfyz-md-0-vdbkd are in the "fd4" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: Modifying the control plane configuration in ClusterClass and wait for changes to be applied to the control plane object
INFO: Modifying the ControlPlaneTemplate of ClusterClass clusterclass-changes-esjb78/quick-start
INFO: Waiting for ControlPlane rollout to complete.
STEP: Dumping logs from the "clusterclass-changes-t1tfyz" workload cluster
Failed to get logs for Machine clusterclass-changes-t1tfyz-g9wfb-94km7, Cluster clusterclass-changes-esjb78/clusterclass-changes-t1tfyz: exit status 2
Failed to get logs for Machine clusterclass-changes-t1tfyz-md-0-vdbkd-775bf556cc-lz8vq, Cluster clusterclass-changes-esjb78/clusterclass-changes-t1tfyz: exit status 2
STEP: Dumping all the Cluster API resources in the "clusterclass-changes-esjb78" namespace
STEP: Deleting cluster clusterclass-changes-esjb78/clusterclass-changes-t1tfyz
STEP: Deleting cluster clusterclass-changes-t1tfyz
INFO: Waiting for the Cluster clusterclass-changes-esjb78/clusterclass-changes-t1tfyz to be deleted
STEP: Waiting for cluster clusterclass-changes-t1tfyz to be deleted
STEP: Deleting namespace used for hosting the "clusterclass-changes" test spec
... skipping 48 lines ...
  testing.tRunner(0xc000502ea0, 0x22db160)
  	/usr/local/go/src/testing/testing.go:1439 +0x102
  created by testing.(*T).Run
  	/usr/local/go/src/testing/testing.go:1486 +0x35f
------------------------------
STEP: Dumping logs from the bootstrap cluster
Failed to get logs for the bootstrap cluster node test-q7jigr-control-plane: exit status 2
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] When testing ClusterClass changes [ClusterClass] [It] Should successfully rollout the managed topology upon changes to the ClusterClass 
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterclass_changes.go:240

Ran 18 of 21 Specs in 2981.080 seconds
FAIL! -- 17 Passed | 1 Failed | 0 Pending | 3 Skipped


Ginkgo ran 1 suite in 50m47.048925802s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make: *** [Makefile:129: run] Error 1
make: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e'
+ cleanup
++ pgrep -f 'docker events'
+ kill 25843
++ pgrep -f 'ctr -n moby events'
+ kill 25844
... skipping 21 lines ...