Result: FAILURE
Tests: 1 failed / 10 succeeded
Started: 2022-08-17 11:16
Elapsed: 32m53s
Revision: main

Test Failures


capi-e2e When testing ClusterClass changes [ClusterClass] Should successfully rollout the managed topology upon changes to the ClusterClass 2m28s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\stesting\sClusterClass\schanges\s\[ClusterClass\]\sShould\ssuccessfully\srollout\sthe\smanaged\stopology\supon\schanges\sto\sthe\sClusterClass$'
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterclass_changes.go:119
Expected success, but got an error:
    <errors.aggregate | len:1, cap:1>: [
        <*errors.StatusError | 0xc001fb2640>{
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {
                    SelfLink: "",
                    ResourceVersion: "",
                    Continue: "",
                    RemainingItemCount: nil,
                },
                Status: "Failure",
                Message: "admission webhook \"default.cluster.cluster.x-k8s.io\" denied the request: Internal error occurred: Cluster clusterclass-changes-hhx1gy can't be validated. ClusterClass quick-start-hgdg3m can not be retrieved: ClusterClass.cluster.x-k8s.io \"quick-start-hgdg3m\" not found",
                Reason: "InternalError",
                Details: {
                    Name: "",
                    Group: "",
                    Kind: "",
                    UID: "",
                    Causes: [
                        {
                            Type: "",
                            Message: "Cluster clusterclass-changes-hhx1gy can't be validated. ClusterClass quick-start-hgdg3m can not be retrieved: ClusterClass.cluster.x-k8s.io \"quick-start-hgdg3m\" not found",
                            Field: "",
                        },
                    ],
                    RetryAfterSeconds: 0,
                },
                Code: 500,
            },
        },
    ]
    admission webhook "default.cluster.cluster.x-k8s.io" denied the request: Internal error occurred: Cluster clusterclass-changes-hhx1gy can't be validated. ClusterClass quick-start-hgdg3m can not be retrieved: ClusterClass.cluster.x-k8s.io "quick-start-hgdg3m" not found
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterclass_changes.go:378




Error lines from build-log.txt

... skipping 1116 lines ...

STEP: Waiting for the control plane to be ready
STEP: Taking stable ownership of the Machines
STEP: Taking ownership of the cluster's PKI material
STEP: PASSED!
STEP: Dumping logs from the "kcp-adoption-jhrod8" workload cluster
Failed to get logs for Machine kcp-adoption-jhrod8-control-plane-0, Cluster kcp-adoption-era1jp/kcp-adoption-jhrod8: exit status 2
STEP: Dumping all the Cluster API resources in the "kcp-adoption-era1jp" namespace
STEP: Deleting cluster kcp-adoption-era1jp/kcp-adoption-jhrod8
STEP: Deleting cluster kcp-adoption-jhrod8
INFO: Waiting for the Cluster kcp-adoption-era1jp/kcp-adoption-jhrod8 to be deleted
STEP: Waiting for cluster kcp-adoption-jhrod8 to be deleted
STEP: Deleting namespace used for hosting the "kcp-adoption" test spec
... skipping 40 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-bgejs9-md-0-q2pqn are in the "fd4" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-bgejs9" workload cluster
Failed to get logs for Machine quick-start-bgejs9-98rv4-hz9xz, Cluster quick-start-x6cmqu/quick-start-bgejs9: exit status 2
Failed to get logs for Machine quick-start-bgejs9-md-0-q2pqn-84c98d7bd7-99cdx, Cluster quick-start-x6cmqu/quick-start-bgejs9: exit status 2
STEP: Dumping all the Cluster API resources in the "quick-start-x6cmqu" namespace
STEP: Deleting cluster quick-start-x6cmqu/quick-start-bgejs9
STEP: Deleting cluster quick-start-bgejs9
INFO: Waiting for the Cluster quick-start-x6cmqu/quick-start-bgejs9 to be deleted
STEP: Waiting for cluster quick-start-bgejs9 to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 48 lines ...
STEP: Waiting for the machine pool workload nodes
STEP: Scaling the machine pool to zero
INFO: Patching the replica count in Machine Pool machine-pool-zshzsz/machine-pool-17cwru-mp-0
STEP: Waiting for the machine pool workload nodes
STEP: PASSED!
STEP: Dumping logs from the "machine-pool-17cwru" workload cluster
Failed to get logs for Machine machine-pool-17cwru-control-plane-z9h5b, Cluster machine-pool-zshzsz/machine-pool-17cwru: exit status 2
STEP: Dumping all the Cluster API resources in the "machine-pool-zshzsz" namespace
STEP: Deleting cluster machine-pool-zshzsz/machine-pool-17cwru
STEP: Deleting cluster machine-pool-17cwru
INFO: Waiting for the Cluster machine-pool-zshzsz/machine-pool-17cwru to be deleted
STEP: Waiting for cluster machine-pool-17cwru to be deleted
STEP: Deleting namespace used for hosting the "machine-pool" test spec
... skipping 48 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED!
STEP: Dumping logs from the "mhc-remediation-9n3yrt" workload cluster
Failed to get logs for Machine mhc-remediation-9n3yrt-control-plane-nwh8w, Cluster mhc-remediation-tuj9kk/mhc-remediation-9n3yrt: exit status 2
Failed to get logs for Machine mhc-remediation-9n3yrt-md-0-775df9d4f6-lrpbb, Cluster mhc-remediation-tuj9kk/mhc-remediation-9n3yrt: exit status 2
STEP: Dumping all the Cluster API resources in the "mhc-remediation-tuj9kk" namespace
STEP: Deleting cluster mhc-remediation-tuj9kk/mhc-remediation-9n3yrt
STEP: Deleting cluster mhc-remediation-9n3yrt
INFO: Waiting for the Cluster mhc-remediation-tuj9kk/mhc-remediation-9n3yrt to be deleted
STEP: Waiting for cluster mhc-remediation-9n3yrt to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 48 lines ...
INFO: Waiting for rolling upgrade to start.
INFO: Waiting for MachineDeployment rolling upgrade to start
INFO: Waiting for rolling upgrade to complete.
INFO: Waiting for MachineDeployment rolling upgrade to complete
STEP: PASSED!
STEP: Dumping logs from the "md-rollout-o2hnid" workload cluster
Failed to get logs for Machine md-rollout-o2hnid-control-plane-2968l, Cluster md-rollout-uji5fh/md-rollout-o2hnid: exit status 2
Failed to get logs for Machine md-rollout-o2hnid-md-0-5494bc6947-p2kcz, Cluster md-rollout-uji5fh/md-rollout-o2hnid: exit status 2
STEP: Dumping all the Cluster API resources in the "md-rollout-uji5fh" namespace
STEP: Deleting cluster md-rollout-uji5fh/md-rollout-o2hnid
STEP: Deleting cluster md-rollout-o2hnid
INFO: Waiting for the Cluster md-rollout-uji5fh/md-rollout-o2hnid to be deleted
STEP: Waiting for cluster md-rollout-o2hnid to be deleted
STEP: Deleting namespace used for hosting the "md-rollout" test spec
... skipping 81 lines ...
STEP: Ensure API servers are stable before doing move
STEP: Moving the cluster back to bootstrap
STEP: Moving workload clusters
INFO: Waiting for the cluster to be reconciled after moving back to bootstrap
STEP: Waiting for cluster to enter the provisioned phase
STEP: Dumping logs from the "self-hosted-j74ao5" workload cluster
Failed to get logs for Machine self-hosted-j74ao5-f6zsc-5c8zr, Cluster self-hosted-u13iks/self-hosted-j74ao5: exit status 2
Failed to get logs for Machine self-hosted-j74ao5-md-0-vj4ql-6fd7cc4f67-4qbjv, Cluster self-hosted-u13iks/self-hosted-j74ao5: exit status 2
STEP: Dumping all the Cluster API resources in the "self-hosted-u13iks" namespace
STEP: Deleting cluster self-hosted-u13iks/self-hosted-j74ao5
STEP: Deleting cluster self-hosted-j74ao5
INFO: Waiting for the Cluster self-hosted-u13iks/self-hosted-j74ao5 to be deleted
STEP: Waiting for cluster self-hosted-j74ao5 to be deleted
STEP: Deleting namespace used for hosting the "self-hosted" test spec
... skipping 40 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-xmx9ug-md-0 are in the "fd4" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-xmx9ug" workload cluster
Failed to get logs for Machine quick-start-xmx9ug-control-plane-wtfbc, Cluster quick-start-di5qkx/quick-start-xmx9ug: exit status 2
Failed to get logs for Machine quick-start-xmx9ug-md-0-7fdf59b4c8-np7cj, Cluster quick-start-di5qkx/quick-start-xmx9ug: exit status 2
STEP: Dumping all the Cluster API resources in the "quick-start-di5qkx" namespace
STEP: Deleting cluster quick-start-di5qkx/quick-start-xmx9ug
STEP: Deleting cluster quick-start-xmx9ug
INFO: Waiting for the Cluster quick-start-di5qkx/quick-start-xmx9ug to be deleted
STEP: Waiting for cluster quick-start-xmx9ug to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 50 lines ...
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
STEP: PASSED!
STEP: Dumping logs from the "mhc-remediation-y6z5z5" workload cluster
Failed to get logs for Machine mhc-remediation-y6z5z5-control-plane-q999r, Cluster mhc-remediation-ni0ior/mhc-remediation-y6z5z5: exit status 2
Failed to get logs for Machine mhc-remediation-y6z5z5-control-plane-vt2bp, Cluster mhc-remediation-ni0ior/mhc-remediation-y6z5z5: exit status 2
Failed to get logs for Machine mhc-remediation-y6z5z5-control-plane-z8qvx, Cluster mhc-remediation-ni0ior/mhc-remediation-y6z5z5: exit status 2
Failed to get logs for Machine mhc-remediation-y6z5z5-md-0-548bb6ddd9-znbwt, Cluster mhc-remediation-ni0ior/mhc-remediation-y6z5z5: exit status 2
STEP: Dumping all the Cluster API resources in the "mhc-remediation-ni0ior" namespace
STEP: Deleting cluster mhc-remediation-ni0ior/mhc-remediation-y6z5z5
STEP: Deleting cluster mhc-remediation-y6z5z5
INFO: Waiting for the Cluster mhc-remediation-ni0ior/mhc-remediation-y6z5z5 to be deleted
STEP: Waiting for cluster mhc-remediation-y6z5z5 to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 47 lines ...
STEP: Modifying the MachineDeployment configuration in ClusterClass and wait for changes to be applied to the MachineDeployment objects
INFO: Modifying the BootstrapConfigTemplate of MachineDeploymentClass "default-worker" of ClusterClass clusterclass-changes-sgibhy/quick-start
INFO: Waiting for MachineDeployment rollout for MachineDeploymentClass "default-worker" to complete.
INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "default-worker") to complete.
STEP: Rebasing the Cluster to a ClusterClass with a modified label for MachineDeployments and wait for changes to be applied to the MachineDeployment objects
STEP: Dumping logs from the "clusterclass-changes-hhx1gy" workload cluster
Failed to get logs for Machine clusterclass-changes-hhx1gy-hmhm6-r7jwp, Cluster clusterclass-changes-sgibhy/clusterclass-changes-hhx1gy: exit status 2
Failed to get logs for Machine clusterclass-changes-hhx1gy-md-0-f2nd9-55f985c98d-vv6js, Cluster clusterclass-changes-sgibhy/clusterclass-changes-hhx1gy: exited with status: 2, &{%!s(*os.file=&{{{0 0 0} 16 {0} <nil> 0 1 true true true} /tmp/clusterclass-changes-hhx1gy-md-0-f2nd9-55f985c98d-vv6js4200099882 <nil> false false false})}
Failed to get logs for Machine clusterclass-changes-hhx1gy-md-0-f2nd9-77957d677c-vc96m, Cluster clusterclass-changes-sgibhy/clusterclass-changes-hhx1gy: exit status 2
STEP: Dumping all the Cluster API resources in the "clusterclass-changes-sgibhy" namespace
STEP: Deleting cluster clusterclass-changes-sgibhy/clusterclass-changes-hhx1gy
STEP: Deleting cluster clusterclass-changes-hhx1gy
INFO: Waiting for the Cluster clusterclass-changes-sgibhy/clusterclass-changes-hhx1gy to be deleted
STEP: Waiting for cluster clusterclass-changes-hhx1gy to be deleted
STEP: Deleting namespace used for hosting the "clusterclass-changes" test spec
... skipping 3 lines ...
• Failure [148.360 seconds]
When testing ClusterClass changes [ClusterClass]
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterclass_changes_test.go:26
  Should successfully rollout the managed topology upon changes to the ClusterClass [It]
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterclass_changes.go:119

  Expected success, but got an error:
      <errors.aggregate | len:1, cap:1>: [
          <*errors.StatusError | 0xc001fb2640>{
              ErrStatus: {
                  TypeMeta: {Kind: "", APIVersion: ""},
                  ListMeta: {
                      SelfLink: "",
                      ResourceVersion: "",
                      Continue: "",
                      RemainingItemCount: nil,
                  },
                  Status: "Failure",
                  Message: "admission webhook \"default.cluster.cluster.x-k8s.io\" denied the request: Internal error occurred: Cluster clusterclass-changes-hhx1gy can't be validated. ClusterClass quick-start-hgdg3m can not be retrieved: ClusterClass.cluster.x-k8s.io \"quick-start-hgdg3m\" not found",
                  Reason: "InternalError",
                  Details: {
                      Name: "",
                      Group: "",
                      Kind: "",
                      UID: "",
... skipping 7 lines ...
                      RetryAfterSeconds: 0,
                  },
                  Code: 500,
              },
          },
      ]
      admission webhook "default.cluster.cluster.x-k8s.io" denied the request: Internal error occurred: Cluster clusterclass-changes-hhx1gy can't be validated. ClusterClass quick-start-hgdg3m can not be retrieved: ClusterClass.cluster.x-k8s.io "quick-start-hgdg3m" not found

  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterclass_changes.go:378

  Full Stack Trace
  sigs.k8s.io/cluster-api/test/e2e.rebaseClusterClassAndWait({0x2528a78?, 0xc00066ea00}, {{0x25344a8, 0xc0007be000}, 0xc000a47d40, 0xc000936540, {0xc00014a5a0, 0x2, 0x2}})
  	/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterclass_changes.go:378 +0x62d
... skipping 70 lines ...
INFO: Waiting for correct number of replicas to exist
STEP: Scaling the MachineDeployment down to 1
INFO: Scaling machine deployment md-scale-744c8m/md-scale-mp5lqq-md-0 from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED!
STEP: Dumping logs from the "md-scale-mp5lqq" workload cluster
Failed to get logs for Machine md-scale-mp5lqq-control-plane-7pmrp, Cluster md-scale-744c8m/md-scale-mp5lqq: exit status 2
Failed to get logs for Machine md-scale-mp5lqq-md-0-5597894bbb-z96hc, Cluster md-scale-744c8m/md-scale-mp5lqq: exit status 2
STEP: Dumping all the Cluster API resources in the "md-scale-744c8m" namespace
STEP: Deleting cluster md-scale-744c8m/md-scale-mp5lqq
STEP: Deleting cluster md-scale-mp5lqq
INFO: Waiting for the Cluster md-scale-744c8m/md-scale-mp5lqq to be deleted
STEP: Waiting for cluster md-scale-mp5lqq to be deleted
STEP: Deleting namespace used for hosting the "md-scale" test spec
... skipping 54 lines ...
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-bhn4qy/k8s-upgrade-and-conformance-ah18e4-md-0-pndmt to be upgraded to v1.24.0
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.24.0
STEP: Waiting until nodes are ready
STEP: PASSED!
STEP: Dumping logs from the "k8s-upgrade-and-conformance-ah18e4" workload cluster
Failed to get logs for Machine k8s-upgrade-and-conformance-ah18e4-2bwcz-6hlqg, Cluster k8s-upgrade-and-conformance-bhn4qy/k8s-upgrade-and-conformance-ah18e4: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-ah18e4-2bwcz-l9x75, Cluster k8s-upgrade-and-conformance-bhn4qy/k8s-upgrade-and-conformance-ah18e4: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-ah18e4-2bwcz-t464z, Cluster k8s-upgrade-and-conformance-bhn4qy/k8s-upgrade-and-conformance-ah18e4: exit status 2
Failed to get logs for Machine k8s-upgrade-and-conformance-ah18e4-md-0-pndmt-7f88cd5c8f-fb5zl, Cluster k8s-upgrade-and-conformance-bhn4qy/k8s-upgrade-and-conformance-ah18e4: exit status 2
STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-bhn4qy" namespace
STEP: Deleting cluster k8s-upgrade-and-conformance-bhn4qy/k8s-upgrade-and-conformance-ah18e4
STEP: Deleting cluster k8s-upgrade-and-conformance-ah18e4
INFO: Waiting for the Cluster k8s-upgrade-and-conformance-bhn4qy/k8s-upgrade-and-conformance-ah18e4 to be deleted
STEP: Waiting for cluster k8s-upgrade-and-conformance-ah18e4 to be deleted
STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec
... skipping 4 lines ...
When upgrading a workload cluster using ClusterClass with a HA control plane using scale-in rollout [ClusterClass]
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade_test.go:101
  Should create and upgrade a workload cluster and eventually run kubetest
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:118
------------------------------
STEP: Dumping logs from the bootstrap cluster
Failed to get logs for the bootstrap cluster node test-5bfy58-control-plane: exit status 2
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] When testing ClusterClass changes [ClusterClass] [It] Should successfully rollout the managed topology upon changes to the ClusterClass 
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterclass_changes.go:378

Ran 11 of 21 Specs in 1596.531 seconds
FAIL! -- 10 Passed | 1 Failed | 0 Pending | 10 Skipped


Ginkgo ran 1 suite in 27m38.756385894s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make: *** [Makefile:129: run] Error 1
make: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e'
+ cleanup
++ pgrep -f 'docker events'
+ kill 25871
++ pgrep -f 'ctr -n moby events'
+ kill 25872
... skipping 21 lines ...