Result: FAILURE
Tests: 1 failed / 9 succeeded
Started: 2022-08-17 17:20
Elapsed: 36m15s
Revision: main

Test Failures


capi-e2e When testing ClusterClass changes [ClusterClass] Should successfully rollout the managed topology upon changes to the ClusterClass (11m45s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\stesting\sClusterClass\schanges\s\[ClusterClass\]\sShould\ssuccessfully\srollout\sthe\smanaged\stopology\supon\schanges\sto\sthe\sClusterClass$'
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterclass_changes.go:119
Timed out after 600.007s.
Expected
    <*errors.fundamental | 0xc000d107e0>: {
        msg: "field \"spec.machineTemplate.nodeDrainTimeout\" should be equal to \"10s\", but is \"30s\"",
        stack: [0x1be4da5, 0x4d7245, 0x4d67bf, 0x7df971, 0x7dfcd9, 0x7e0507, 0x7dfa6b, 0x1be4ad2, 0x1be35a6, 0x7a9cd1, 0x7a96c5, 0x7a8dbb, 0x7aeaaa, 0x7ae4a7, 0x7ceea8, 0x7cebc5, 0x7ce265, 0x7d0632, 0x7dc529, 0x7dc336, 0x1bffb8d, 0x5204a2, 0x46d941],
    }
to be nil
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterclass_changes.go:240
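For context, the check that fails at clusterclass_changes.go:240 waits for a change to the ClusterClass ControlPlaneTemplate to propagate to the workload cluster's KubeadmControlPlane (see the "Modifying the ControlPlaneTemplate ... Waiting for ControlPlane rollout to complete" lines in the log below). The following Go snippet is a minimal sketch, not the upstream test code, of the kind of Gomega polling assertion that produces the error above; the function name, parameters, and poll interval are illustrative.

// Minimal sketch (illustrative, not the actual e2e test code) of a polling check
// that waits for spec.machineTemplate.nodeDrainTimeout on the KubeadmControlPlane
// to reach the value set in the ClusterClass ControlPlaneTemplate.
package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	"github.com/pkg/errors"
	controlplanev1 "sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForNodeDrainTimeout (hypothetical helper) polls the KubeadmControlPlane
// identified by key until its nodeDrainTimeout equals want, or the timeout expires.
func waitForNodeDrainTimeout(ctx context.Context, c client.Client, key client.ObjectKey, want time.Duration) {
	Eventually(func() error {
		kcp := &controlplanev1.KubeadmControlPlane{}
		if err := c.Get(ctx, key, kcp); err != nil {
			return err
		}
		got := kcp.Spec.MachineTemplate.NodeDrainTimeout
		if got == nil {
			return errors.New("field \"spec.machineTemplate.nodeDrainTimeout\" is not set")
		}
		if got.Duration != want {
			// Same shape as the error reported above ("should be equal to \"10s\", but is \"30s\"").
			return errors.Errorf("field %q should be equal to %q, but is %q",
				"spec.machineTemplate.nodeDrainTimeout", want.String(), got.Duration.String())
		}
		return nil
	}, 10*time.Minute, 10*time.Second).Should(Succeed(), "timed out waiting for the ControlPlane rollout")
}

In this run the field still reported "30s" after the full 600s, i.e. the modified ControlPlaneTemplate was never rolled out to the KubeadmControlPlane before the timeout.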
				


Error lines from build-log.txt

... skipping 1136 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED!
STEP: Dumping logs from the "mhc-remediation-c3yi6x" workload cluster
Failed to get logs for Machine mhc-remediation-c3yi6x-control-plane-2pwkd, Cluster mhc-remediation-7tsmln/mhc-remediation-c3yi6x: exit status 2
Failed to get logs for Machine mhc-remediation-c3yi6x-md-0-dcb7f8d9f-tzdml, Cluster mhc-remediation-7tsmln/mhc-remediation-c3yi6x: exit status 2
STEP: Dumping all the Cluster API resources in the "mhc-remediation-7tsmln" namespace
STEP: Deleting cluster mhc-remediation-7tsmln/mhc-remediation-c3yi6x
STEP: Deleting cluster mhc-remediation-c3yi6x
INFO: Waiting for the Cluster mhc-remediation-7tsmln/mhc-remediation-c3yi6x to be deleted
STEP: Waiting for cluster mhc-remediation-c3yi6x to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 35 lines ...

STEP: Waiting for the control plane to be ready
STEP: Taking stable ownership of the Machines
STEP: Taking ownership of the cluster's PKI material
STEP: PASSED!
STEP: Dumping logs from the "kcp-adoption-lfhxyk" workload cluster
Failed to get logs for Machine kcp-adoption-lfhxyk-control-plane-0, Cluster kcp-adoption-kn121a/kcp-adoption-lfhxyk: exit status 2
STEP: Dumping all the Cluster API resources in the "kcp-adoption-kn121a" namespace
STEP: Deleting cluster kcp-adoption-kn121a/kcp-adoption-lfhxyk
STEP: Deleting cluster kcp-adoption-lfhxyk
INFO: Waiting for the Cluster kcp-adoption-kn121a/kcp-adoption-lfhxyk to be deleted
STEP: Waiting for cluster kcp-adoption-lfhxyk to be deleted
STEP: Deleting namespace used for hosting the "kcp-adoption" test spec
... skipping 50 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED!
STEP: Dumping logs from the "mhc-remediation-uvmywj" workload cluster
Failed to get logs for Machine mhc-remediation-uvmywj-control-plane-c5mgc, Cluster mhc-remediation-rrmqkm/mhc-remediation-uvmywj: exit status 2
Failed to get logs for Machine mhc-remediation-uvmywj-control-plane-l6vzl, Cluster mhc-remediation-rrmqkm/mhc-remediation-uvmywj: exit status 2
Failed to get logs for Machine mhc-remediation-uvmywj-control-plane-vfbz2, Cluster mhc-remediation-rrmqkm/mhc-remediation-uvmywj: exit status 2
Failed to get logs for Machine mhc-remediation-uvmywj-md-0-cd4c956f4-nbzqr, Cluster mhc-remediation-rrmqkm/mhc-remediation-uvmywj: exit status 2
STEP: Dumping all the Cluster API resources in the "mhc-remediation-rrmqkm" namespace
STEP: Deleting cluster mhc-remediation-rrmqkm/mhc-remediation-uvmywj
STEP: Deleting cluster mhc-remediation-uvmywj
INFO: Waiting for the Cluster mhc-remediation-rrmqkm/mhc-remediation-uvmywj to be deleted
STEP: Waiting for cluster mhc-remediation-uvmywj to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 48 lines ...
STEP: Waiting for the machine pool workload nodes
STEP: Scaling the machine pool to zero
INFO: Patching the replica count in Machine Pool machine-pool-kqsya6/machine-pool-4gtz0u-mp-0
STEP: Waiting for the machine pool workload nodes
STEP: PASSED!
STEP: Dumping logs from the "machine-pool-4gtz0u" workload cluster
Failed to get logs for Machine machine-pool-4gtz0u-control-plane-xh4nz, Cluster machine-pool-kqsya6/machine-pool-4gtz0u: exit status 2
STEP: Dumping all the Cluster API resources in the "machine-pool-kqsya6" namespace
STEP: Deleting cluster machine-pool-kqsya6/machine-pool-4gtz0u
STEP: Deleting cluster machine-pool-4gtz0u
INFO: Waiting for the Cluster machine-pool-kqsya6/machine-pool-4gtz0u to be deleted
STEP: Waiting for cluster machine-pool-4gtz0u to be deleted
STEP: Deleting namespace used for hosting the "machine-pool" test spec
... skipping 81 lines ...
INFO: Patching the new Kubernetes version to Machine Pool k8s-upgrade-with-runtimesdk-118tsn/k8s-upgrade-with-runtimesdk-4pryg5-mp-0
INFO: Waiting for Kubernetes versions of machines in MachinePool k8s-upgrade-with-runtimesdk-118tsn/k8s-upgrade-with-runtimesdk-4pryg5-mp-0 to be upgraded from v1.23.6 to v1.24.0
INFO: Ensuring all MachinePool Instances have upgraded kubernetes version v1.24.0
STEP: Waiting until nodes are ready
STEP: Dumping resources and deleting the workload cluster
STEP: Deleting the workload cluster
Failed to get logs for Machine k8s-upgrade-with-runtimesdk-4pryg5-b4mhs-4tn2l, Cluster k8s-upgrade-with-runtimesdk-118tsn/k8s-upgrade-with-runtimesdk-4pryg5: exit status 2
Failed to get logs for Machine k8s-upgrade-with-runtimesdk-4pryg5-md-0-qp7mv-c844b99fd-895wc, Cluster k8s-upgrade-with-runtimesdk-118tsn/k8s-upgrade-with-runtimesdk-4pryg5: exit status 2
Failed to get logs for Machine k8s-upgrade-with-runtimesdk-4pryg5-md-0-qp7mv-c844b99fd-rfww2, Cluster k8s-upgrade-with-runtimesdk-118tsn/k8s-upgrade-with-runtimesdk-4pryg5: exit status 2
Failed to get logs for MachinePool k8s-upgrade-with-runtimesdk-4pryg5-mp-0, Cluster k8s-upgrade-with-runtimesdk-118tsn/k8s-upgrade-with-runtimesdk-4pryg5: exit status 2
STEP: Deleting the workload cluster
STEP: Deleting cluster k8s-upgrade-with-runtimesdk-4pryg5
INFO: Blocking with BeforeClusterDelete hook
STEP: Setting BeforeClusterDelete response to Status:Success to unblock the reconciliation
STEP: Checking all lifecycle hooks have been called
STEP: PASSED!
... skipping 85 lines ...
STEP: Ensure API servers are stable before doing move
STEP: Moving the cluster back to bootstrap
STEP: Moving workload clusters
INFO: Waiting for the cluster to be reconciled after moving back to bootstrap
STEP: Waiting for cluster to enter the provisioned phase
STEP: Dumping logs from the "self-hosted-d2bw4j" workload cluster
Failed to get logs for Machine self-hosted-d2bw4j-fw772-dvbt5, Cluster self-hosted-kjyv61/self-hosted-d2bw4j: exit status 2
Failed to get logs for Machine self-hosted-d2bw4j-md-0-n67j5-66d5858c68-xftr7, Cluster self-hosted-kjyv61/self-hosted-d2bw4j: exit status 2
STEP: Dumping all the Cluster API resources in the "self-hosted-kjyv61" namespace
STEP: Deleting cluster self-hosted-kjyv61/self-hosted-d2bw4j
STEP: Deleting cluster self-hosted-d2bw4j
INFO: Waiting for the Cluster self-hosted-kjyv61/self-hosted-d2bw4j to be deleted
STEP: Waiting for cluster self-hosted-d2bw4j to be deleted
STEP: Deleting namespace used for hosting the "self-hosted" test spec
... skipping 40 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-m76zcv-md-0 are in the "fd4" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-m76zcv" workload cluster
Failed to get logs for Machine quick-start-m76zcv-control-plane-dhgps, Cluster quick-start-dutqra/quick-start-m76zcv: exit status 2
Failed to get logs for Machine quick-start-m76zcv-md-0-75cf8d4d9f-ht6sl, Cluster quick-start-dutqra/quick-start-m76zcv: exit status 2
STEP: Dumping all the Cluster API resources in the "quick-start-dutqra" namespace
STEP: Deleting cluster quick-start-dutqra/quick-start-m76zcv
STEP: Deleting cluster quick-start-m76zcv
INFO: Waiting for the Cluster quick-start-dutqra/quick-start-m76zcv to be deleted
STEP: Waiting for cluster quick-start-m76zcv to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 42 lines ...
STEP: Checking all the machines controlled by clusterclass-changes-nh8euo-md-0-hxhfk are in the "fd4" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: Modifying the control plane configuration in ClusterClass and wait for changes to be applied to the control plane object
INFO: Modifying the ControlPlaneTemplate of ClusterClass clusterclass-changes-z69rxz/quick-start
INFO: Waiting for ControlPlane rollout to complete.
STEP: Dumping logs from the "clusterclass-changes-nh8euo" workload cluster
Failed to get logs for Machine clusterclass-changes-nh8euo-k5xxn-bfs54, Cluster clusterclass-changes-z69rxz/clusterclass-changes-nh8euo: exit status 2
Failed to get logs for Machine clusterclass-changes-nh8euo-md-0-hxhfk-65dbbcb55d-wc7th, Cluster clusterclass-changes-z69rxz/clusterclass-changes-nh8euo: exit status 2
STEP: Dumping all the Cluster API resources in the "clusterclass-changes-z69rxz" namespace
STEP: Deleting cluster clusterclass-changes-z69rxz/clusterclass-changes-nh8euo
STEP: Deleting cluster clusterclass-changes-nh8euo
INFO: Waiting for the Cluster clusterclass-changes-z69rxz/clusterclass-changes-nh8euo to be deleted
STEP: Waiting for cluster clusterclass-changes-nh8euo to be deleted
STEP: Deleting namespace used for hosting the "clusterclass-changes" test spec
... skipping 98 lines ...
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-h1eam6/k8s-upgrade-and-conformance-v1j3le-md-0-k46qh to be upgraded to v1.24.0
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.24.0
STEP: Waiting until nodes are ready
STEP: PASSED!
STEP: Dumping logs from the "k8s-upgrade-and-conformance-v1j3le" workload cluster
Failed to get logs for Machine k8s-upgrade-and-conformance-v1j3le-8wrj2-pmgfz, Cluster k8s-upgrade-and-conformance-h1eam6/k8s-upgrade-and-conformance-v1j3le: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-v1j3le-md-0-k46qh-6f9cc84d64-c8fz7, Cluster k8s-upgrade-and-conformance-h1eam6/k8s-upgrade-and-conformance-v1j3le: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-v1j3le-md-0-k46qh-6f9cc84d64-gn2x4, Cluster k8s-upgrade-and-conformance-h1eam6/k8s-upgrade-and-conformance-v1j3le: exit status 2
STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-h1eam6" namespace
STEP: Deleting cluster k8s-upgrade-and-conformance-h1eam6/k8s-upgrade-and-conformance-v1j3le
STEP: Deleting cluster k8s-upgrade-and-conformance-v1j3le
INFO: Waiting for the Cluster k8s-upgrade-and-conformance-h1eam6/k8s-upgrade-and-conformance-v1j3le to be deleted
STEP: Waiting for cluster k8s-upgrade-and-conformance-v1j3le to be deleted
STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec
... skipping 115 lines ...
STEP: Deleting cluster clusterctl-upgrade/clusterctl-upgrade-evzcgw
STEP: Deleting namespace clusterctl-upgrade used for hosting the "clusterctl-upgrade" test
INFO: Deleting namespace clusterctl-upgrade
STEP: Deleting providers
INFO: clusterctl delete --all
STEP: Dumping logs from the "clusterctl-upgrade-evzcgw" workload cluster
Failed to get logs for Machine clusterctl-upgrade-evzcgw-control-plane-8pwrt, Cluster clusterctl-upgrade-mt2p11/clusterctl-upgrade-evzcgw: exit status 2
Failed to get logs for Machine clusterctl-upgrade-evzcgw-md-0-8ffd8dc78-spvdv, Cluster clusterctl-upgrade-mt2p11/clusterctl-upgrade-evzcgw: exit status 2
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-mt2p11" namespace
STEP: Deleting cluster clusterctl-upgrade-mt2p11/clusterctl-upgrade-evzcgw
STEP: Deleting cluster clusterctl-upgrade-evzcgw
INFO: Waiting for the Cluster clusterctl-upgrade-mt2p11/clusterctl-upgrade-evzcgw to be deleted
STEP: Waiting for cluster clusterctl-upgrade-evzcgw to be deleted
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
... skipping 4 lines ...
When testing clusterctl upgrades [clusterctl-Upgrade]
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterctl_upgrade_test.go:26
  Should create a management cluster and then upgrade all the providers
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterctl_upgrade.go:152
------------------------------
STEP: Dumping logs from the bootstrap cluster
Failed to get logs for the bootstrap cluster node test-bfaa6p-control-plane: exit status 2
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] When testing ClusterClass changes [ClusterClass] [It] Should successfully rollout the managed topology upon changes to the ClusterClass 
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterclass_changes.go:240

Ran 10 of 21 Specs in 1781.805 seconds
FAIL! -- 9 Passed | 1 Failed | 0 Pending | 11 Skipped


Ginkgo ran 1 suite in 30m46.425023611s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make: *** [Makefile:129: run] Error 1
make: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e'
+ cleanup
++ pgrep -f 'docker events'
+ kill 25880
++ pgrep -f 'ctr -n moby events'
+ kill 25881
... skipping 21 lines ...