Result: FAILURE
Tests: 1 failed / 12 succeeded
Started: 2022-08-16 14:46
Elapsed: 50m17s
Revision: main

Test Failures


capi-e2e When testing clusterctl upgrades [clusterctl-Upgrade] Should create a management cluster and then upgrade all the providers (5m0s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\stesting\sclusterctl\supgrades\s\[clusterctl\-Upgrade\]\sShould\screate\sa\smanagement\scluster\sand\sthen\supgrade\sall\sthe\sproviders$'
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterctl_upgrade.go:152
Timed out after 300.000s.
Timed out waiting for Cluster clusterctl-upgrade-a2y2ai/clusterctl-upgrade-d7r1ft to provision
Expected
    <string>: Provisioning
to equal
    <string>: Provisioned
/home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/cluster_helpers.go:144
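The failure above is a polling timeout: the framework repeatedly reads the Cluster's phase and fails when it has not reached "Provisioned" within 300s. The following is a minimal stand-in sketch of that wait loop (not the actual framework code in cluster_helpers.go, which uses Gomega's Eventually); `get_phase` is a hypothetical stub simulating a Cluster stuck in "Provisioning":

```shell
# wait_for_phase polls get_phase until it returns the wanted phase or
# the timeout (seconds) elapses, mirroring the failure mode in the log.
wait_for_phase() {
  want=$1; timeout=$2
  end=$(( $(date +%s) + timeout ))
  while :; do
    got=$(get_phase)
    [ "$got" = "$want" ] && return 0
    if [ "$(date +%s)" -ge "$end" ]; then
      echo "timed out: expected '$got' to equal '$want'"
      return 1
    fi
    sleep 1
  done
}

# Hypothetical stub: a Cluster that never leaves the Provisioning phase.
get_phase() { echo Provisioning; }

wait_for_phase Provisioned 2 || true
# prints: timed out: expected 'Provisioning' to equal 'Provisioned'
```

When triaging a run like this, the usual next step is to look at the dumped Cluster API resources in the test's artifact directory to see which condition kept the Cluster in "Provisioning".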
				
stdout/stderr from junit.e2e_suite.1.xml



12 Passed Tests

8 Skipped Tests

Error lines from build-log.txt

... skipping 1125 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-se4554-md-0 are in the "fd4" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-se4554" workload cluster
Failed to get logs for Machine quick-start-se4554-control-plane-d8ct8, Cluster quick-start-fmrxoq/quick-start-se4554: exit status 2
Failed to get logs for Machine quick-start-se4554-md-0-957f9658d-g2j2v, Cluster quick-start-fmrxoq/quick-start-se4554: exit status 2
STEP: Dumping all the Cluster API resources in the "quick-start-fmrxoq" namespace
STEP: Deleting cluster quick-start-fmrxoq/quick-start-se4554
STEP: Deleting cluster quick-start-se4554
INFO: Waiting for the Cluster quick-start-fmrxoq/quick-start-se4554 to be deleted
STEP: Waiting for cluster quick-start-se4554 to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 48 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED!
STEP: Dumping logs from the "mhc-remediation-q9eddl" workload cluster
Failed to get logs for Machine mhc-remediation-q9eddl-control-plane-j9rrd, Cluster mhc-remediation-dz2eyb/mhc-remediation-q9eddl: exit status 2
Failed to get logs for Machine mhc-remediation-q9eddl-md-0-5db466854b-6w5x8, Cluster mhc-remediation-dz2eyb/mhc-remediation-q9eddl: exit status 2
STEP: Dumping all the Cluster API resources in the "mhc-remediation-dz2eyb" namespace
STEP: Deleting cluster mhc-remediation-dz2eyb/mhc-remediation-q9eddl
STEP: Deleting cluster mhc-remediation-q9eddl
INFO: Waiting for the Cluster mhc-remediation-dz2eyb/mhc-remediation-q9eddl to be deleted
STEP: Waiting for cluster mhc-remediation-q9eddl to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 48 lines ...
INFO: Waiting for rolling upgrade to start.
INFO: Waiting for MachineDeployment rolling upgrade to start
INFO: Waiting for rolling upgrade to complete.
INFO: Waiting for MachineDeployment rolling upgrade to complete
STEP: PASSED!
STEP: Dumping logs from the "md-rollout-xq3ccz" workload cluster
Failed to get logs for Machine md-rollout-xq3ccz-control-plane-xx6gx, Cluster md-rollout-g6zhnq/md-rollout-xq3ccz: exit status 2
Failed to get logs for Machine md-rollout-xq3ccz-md-0-7456cc447f-fqrpz, Cluster md-rollout-g6zhnq/md-rollout-xq3ccz: exit status 2
STEP: Dumping all the Cluster API resources in the "md-rollout-g6zhnq" namespace
STEP: Deleting cluster md-rollout-g6zhnq/md-rollout-xq3ccz
STEP: Deleting cluster md-rollout-xq3ccz
INFO: Waiting for the Cluster md-rollout-g6zhnq/md-rollout-xq3ccz to be deleted
STEP: Waiting for cluster md-rollout-xq3ccz to be deleted
STEP: Deleting namespace used for hosting the "md-rollout" test spec
... skipping 42 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-fxzu39-md-0 are in the "fd4" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-fxzu39" workload cluster
Failed to get logs for Machine quick-start-fxzu39-control-plane-qr26r, Cluster quick-start-gmoa57/quick-start-fxzu39: exit status 2
Failed to get logs for Machine quick-start-fxzu39-md-0-7b9c88d845-hrstw, Cluster quick-start-gmoa57/quick-start-fxzu39: exit status 2
STEP: Dumping all the Cluster API resources in the "quick-start-gmoa57" namespace
STEP: Deleting cluster quick-start-gmoa57/quick-start-fxzu39
STEP: Deleting cluster quick-start-fxzu39
INFO: Waiting for the Cluster quick-start-gmoa57/quick-start-fxzu39 to be deleted
STEP: Waiting for cluster quick-start-fxzu39 to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 81 lines ...
STEP: Ensure API servers are stable before doing move
STEP: Moving the cluster back to bootstrap
STEP: Moving workload clusters
INFO: Waiting for the cluster to be reconciled after moving back to bootstrap
STEP: Waiting for cluster to enter the provisioned phase
STEP: Dumping logs from the "self-hosted-8gunyp" workload cluster
Failed to get logs for Machine self-hosted-8gunyp-control-plane-4hpfh, Cluster self-hosted-nzebwo/self-hosted-8gunyp: exit status 2
Failed to get logs for Machine self-hosted-8gunyp-md-0-86b847d6df-qwl6n, Cluster self-hosted-nzebwo/self-hosted-8gunyp: exit status 2
STEP: Dumping all the Cluster API resources in the "self-hosted-nzebwo" namespace
STEP: Deleting cluster self-hosted-nzebwo/self-hosted-8gunyp
STEP: Deleting cluster self-hosted-8gunyp
INFO: Waiting for the Cluster self-hosted-nzebwo/self-hosted-8gunyp to be deleted
STEP: Waiting for cluster self-hosted-8gunyp to be deleted
STEP: Deleting namespace used for hosting the "self-hosted" test spec
... skipping 50 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED!
STEP: Dumping logs from the "mhc-remediation-j80aa8" workload cluster
Failed to get logs for Machine mhc-remediation-j80aa8-control-plane-7lf49, Cluster mhc-remediation-tdklyp/mhc-remediation-j80aa8: exit status 2
Failed to get logs for Machine mhc-remediation-j80aa8-control-plane-9mls2, Cluster mhc-remediation-tdklyp/mhc-remediation-j80aa8: exit status 2
Failed to get logs for Machine mhc-remediation-j80aa8-control-plane-tw7v7, Cluster mhc-remediation-tdklyp/mhc-remediation-j80aa8: exit status 2
Failed to get logs for Machine mhc-remediation-j80aa8-md-0-7f7457df46-7sd4c, Cluster mhc-remediation-tdklyp/mhc-remediation-j80aa8: exit status 2
STEP: Dumping all the Cluster API resources in the "mhc-remediation-tdklyp" namespace
STEP: Deleting cluster mhc-remediation-tdklyp/mhc-remediation-j80aa8
STEP: Deleting cluster mhc-remediation-j80aa8
INFO: Waiting for the Cluster mhc-remediation-tdklyp/mhc-remediation-j80aa8 to be deleted
STEP: Waiting for cluster mhc-remediation-j80aa8 to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 48 lines ...
STEP: Waiting for the machine pool workload nodes
STEP: Scaling the machine pool to zero
INFO: Patching the replica count in Machine Pool machine-pool-bw5kb3/machine-pool-xdgyzm-mp-0
STEP: Waiting for the machine pool workload nodes
STEP: PASSED!
STEP: Dumping logs from the "machine-pool-xdgyzm" workload cluster
Failed to get logs for Machine machine-pool-xdgyzm-control-plane-6xvwb, Cluster machine-pool-bw5kb3/machine-pool-xdgyzm: exit status 2
STEP: Dumping all the Cluster API resources in the "machine-pool-bw5kb3" namespace
STEP: Deleting cluster machine-pool-bw5kb3/machine-pool-xdgyzm
STEP: Deleting cluster machine-pool-xdgyzm
INFO: Waiting for the Cluster machine-pool-bw5kb3/machine-pool-xdgyzm to be deleted
STEP: Waiting for cluster machine-pool-xdgyzm to be deleted
STEP: Deleting namespace used for hosting the "machine-pool" test spec
... skipping 52 lines ...
STEP: Waiting for deployment node-drain-sdoc1h-unevictable-workload/unevictable-pod-vyc to be available
STEP: Scale down the control plane of the workload cluster and make sure that nodes running workloads can be deleted even if the draining process is blocked.
INFO: Scaling controlplane node-drain-sdoc1h/node-drain-ho3vlx-control-plane from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED!
STEP: Dumping logs from the "node-drain-ho3vlx" workload cluster
Failed to get logs for Machine node-drain-ho3vlx-control-plane-fr6zl, Cluster node-drain-sdoc1h/node-drain-ho3vlx: exit status 2
STEP: Dumping all the Cluster API resources in the "node-drain-sdoc1h" namespace
STEP: Deleting cluster node-drain-sdoc1h/node-drain-ho3vlx
STEP: Deleting cluster node-drain-ho3vlx
INFO: Waiting for the Cluster node-drain-sdoc1h/node-drain-ho3vlx to be deleted
STEP: Waiting for cluster node-drain-ho3vlx to be deleted
STEP: Deleting namespace used for hosting the "node-drain" test spec
... skipping 54 lines ...
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-9p6qkr/k8s-upgrade-and-conformance-6jete4-md-0-rxfgd to be upgraded to v1.24.0
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.24.0
STEP: Waiting until nodes are ready
STEP: PASSED!
STEP: Dumping logs from the "k8s-upgrade-and-conformance-6jete4" workload cluster
Failed to get logs for Machine k8s-upgrade-and-conformance-6jete4-hrr8h-5tmq9, Cluster k8s-upgrade-and-conformance-9p6qkr/k8s-upgrade-and-conformance-6jete4: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-6jete4-hrr8h-ldqhn, Cluster k8s-upgrade-and-conformance-9p6qkr/k8s-upgrade-and-conformance-6jete4: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-6jete4-hrr8h-lpl85, Cluster k8s-upgrade-and-conformance-9p6qkr/k8s-upgrade-and-conformance-6jete4: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-6jete4-md-0-rxfgd-dfd68dd4d-r9js8, Cluster k8s-upgrade-and-conformance-9p6qkr/k8s-upgrade-and-conformance-6jete4: exit status 2
STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-9p6qkr" namespace
STEP: Deleting cluster k8s-upgrade-and-conformance-9p6qkr/k8s-upgrade-and-conformance-6jete4
STEP: Deleting cluster k8s-upgrade-and-conformance-6jete4
INFO: Waiting for the Cluster k8s-upgrade-and-conformance-9p6qkr/k8s-upgrade-and-conformance-6jete4 to be deleted
STEP: Waiting for cluster k8s-upgrade-and-conformance-6jete4 to be deleted
STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec
... skipping 52 lines ...
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-wci9ae/k8s-upgrade-and-conformance-c9uxqt-md-0-cbt66 to be upgraded to v1.24.0
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.24.0
STEP: Waiting until nodes are ready
STEP: PASSED!
STEP: Dumping logs from the "k8s-upgrade-and-conformance-c9uxqt" workload cluster
Failed to get logs for Machine k8s-upgrade-and-conformance-c9uxqt-fcq55-wsc6q, Cluster k8s-upgrade-and-conformance-wci9ae/k8s-upgrade-and-conformance-c9uxqt: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-c9uxqt-md-0-cbt66-684fb8b84c-4rqrd, Cluster k8s-upgrade-and-conformance-wci9ae/k8s-upgrade-and-conformance-c9uxqt: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-c9uxqt-md-0-cbt66-684fb8b84c-sxg7m, Cluster k8s-upgrade-and-conformance-wci9ae/k8s-upgrade-and-conformance-c9uxqt: exit status 2
STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-wci9ae" namespace
STEP: Deleting cluster k8s-upgrade-and-conformance-wci9ae/k8s-upgrade-and-conformance-c9uxqt
STEP: Deleting cluster k8s-upgrade-and-conformance-c9uxqt
INFO: Waiting for the Cluster k8s-upgrade-and-conformance-wci9ae/k8s-upgrade-and-conformance-c9uxqt to be deleted
STEP: Waiting for cluster k8s-upgrade-and-conformance-c9uxqt to be deleted
STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec
... skipping 127 lines ...
INFO: Waiting for correct number of replicas to exist
STEP: Scaling the MachineDeployment down to 1
INFO: Scaling machine deployment md-scale-2swi2k/md-scale-l8ea2z-md-0 from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED!
STEP: Dumping logs from the "md-scale-l8ea2z" workload cluster
Failed to get logs for Machine md-scale-l8ea2z-control-plane-ccv9x, Cluster md-scale-2swi2k/md-scale-l8ea2z: exit status 2
Failed to get logs for Machine md-scale-l8ea2z-md-0-67f8894dc8-7xtdt, Cluster md-scale-2swi2k/md-scale-l8ea2z: exit status 2
STEP: Dumping all the Cluster API resources in the "md-scale-2swi2k" namespace
STEP: Deleting cluster md-scale-2swi2k/md-scale-l8ea2z
STEP: Deleting cluster md-scale-l8ea2z
INFO: Waiting for the Cluster md-scale-2swi2k/md-scale-l8ea2z to be deleted
STEP: Waiting for cluster md-scale-l8ea2z to be deleted
STEP: Deleting namespace used for hosting the "md-scale" test spec
... skipping 54 lines ...
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-kni4ww/k8s-upgrade-and-conformance-g4zidg-md-0-vrjpx to be upgraded to v1.24.0
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.24.0
STEP: Waiting until nodes are ready
STEP: PASSED!
STEP: Dumping logs from the "k8s-upgrade-and-conformance-g4zidg" workload cluster
Failed to get logs for Machine k8s-upgrade-and-conformance-g4zidg-md-0-vrjpx-64c55b7679-kjlzq, Cluster k8s-upgrade-and-conformance-kni4ww/k8s-upgrade-and-conformance-g4zidg: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-g4zidg-q4gzd-2gnfd, Cluster k8s-upgrade-and-conformance-kni4ww/k8s-upgrade-and-conformance-g4zidg: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-g4zidg-q4gzd-74kws, Cluster k8s-upgrade-and-conformance-kni4ww/k8s-upgrade-and-conformance-g4zidg: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-g4zidg-q4gzd-nstlv, Cluster k8s-upgrade-and-conformance-kni4ww/k8s-upgrade-and-conformance-g4zidg: exit status 2
STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-kni4ww" namespace
STEP: Deleting cluster k8s-upgrade-and-conformance-kni4ww/k8s-upgrade-and-conformance-g4zidg
STEP: Deleting cluster k8s-upgrade-and-conformance-g4zidg
INFO: Waiting for the Cluster k8s-upgrade-and-conformance-kni4ww/k8s-upgrade-and-conformance-g4zidg to be deleted
STEP: Waiting for cluster k8s-upgrade-and-conformance-g4zidg to be deleted
STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec
... skipping 4 lines ...
When upgrading a workload cluster using ClusterClass with a HA control plane [ClusterClass]
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade_test.go:83
  Should create and upgrade a workload cluster and eventually run kubetest
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:118
------------------------------
STEP: Dumping logs from the bootstrap cluster
Failed to get logs for the bootstrap cluster node test-9uyc4x-control-plane: exit status 2
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] When testing clusterctl upgrades [clusterctl-Upgrade] [It] Should create a management cluster and then upgrade all the providers 
/home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/cluster_helpers.go:144

Ran 13 of 21 Specs in 2619.612 seconds
FAIL! -- 12 Passed | 1 Failed | 0 Pending | 8 Skipped


Ginkgo ran 1 suite in 44m46.656523892s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
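The two silencing mechanisms the notice describes, side by side; either one suppresses the Ginkgo 2.0 banner on subsequent runs:

```shell
# Option 1: acknowledge via environment variable for this shell session.
export ACK_GINKGO_RC=true

# Option 2: acknowledge persistently via a marker file in $HOME.
touch "$HOME/.ack-ginkgo-rc"
```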
make: *** [Makefile:129: run] Error 1
make: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e'
+ cleanup
++ pgrep -f 'docker events'
+ kill 25853
++ pgrep -f 'ctr -n moby events'
+ kill 25854
... skipping 22 lines ...