PR: sedefsavas: v1beta2 APIs
Result: FAILURE
Tests: 1 failed / 25 succeeded
Started: 2022-09-12 17:56
Elapsed: 2h0m
Revision: f1a8c221b7cad15d46183620dc06a70f4ade31ab
Refs: 3720

Test Failures


capa-e2e [unmanaged] [Cluster API Framework] Clusterctl Upgrade Spec [from v1alpha4] Should create a management cluster and then upgrade all the providers (16m14s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capa\-e2e\s\[unmanaged\]\s\[Cluster\sAPI\sFramework\]\sClusterctl\sUpgrade\sSpec\s\[from\sv1alpha4\]\sShould\screate\sa\smanagement\scluster\sand\sthen\supgrade\sall\sthe\sproviders$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.0/e2e/clusterctl_upgrade.go:147
failed to run clusterctl upgrade
Unexpected error:
    <*errors.fundamental | 0xc000d58d80>: {
        msg: "unable to upgrade CRD \"awsclustercontrolleridentities.infrastructure.cluster.x-k8s.io\" because the new CRD does not contain the storage version \"v1alpha4\" of the current CRD, thus not allowing CR migration",
        stack: [0x203ef95, 0x203ea29, 0x2061954, 0x205f765, 0x2092579, 0x20966d8, 0x209972d, 0x210a7d7, 0x948d71, 0x948765, 0x947e5b, 0x94db4a, 0x94d547, 0x9599e8, 0x959705, 0x958da5, 0x95b172, 0x967569, 0x967376, 0x216a53b, 0x520862, 0x46d9a1],
    }
    unable to upgrade CRD "awsclustercontrolleridentities.infrastructure.cluster.x-k8s.io" because the new CRD does not contain the storage version "v1alpha4" of the current CRD, thus not allowing CR migration
occurred
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.0/framework/clusterctl/client.go:142
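The failure above is clusterctl's CRD-migration safety check: before replacing a CRD during `clusterctl upgrade`, it verifies that the new CRD still declares the storage version of the currently installed CRD ("v1alpha4" here). If that version is dropped, objects persisted under it in etcd can no longer be read back and migrated, so the upgrade is refused. A minimal, self-contained sketch of that rule follows — the type and helper names are illustrative, not the actual cluster-api code:

```go
package main

import "fmt"

// crdVersion mirrors the relevant fields of an apiextensions.k8s.io/v1
// CustomResourceDefinitionVersion: each version a CRD serves, with
// exactly one flagged as the storage version.
type crdVersion struct {
	Name    string
	Storage bool
}

// storageVersion returns the name of the version marked storage: true.
func storageVersion(versions []crdVersion) (string, bool) {
	for _, v := range versions {
		if v.Storage {
			return v.Name, true
		}
	}
	return "", false
}

// canUpgradeCRD reports whether the proposed CRD still declares the
// current CRD's storage version. That version must remain present so
// stored objects can be decoded and migrated to the new storage version.
func canUpgradeCRD(current, proposed []crdVersion) error {
	cur, ok := storageVersion(current)
	if !ok {
		return fmt.Errorf("current CRD has no storage version")
	}
	for _, v := range proposed {
		if v.Name == cur {
			return nil // old storage version still served: migration possible
		}
	}
	return fmt.Errorf("new CRD does not contain the storage version %q of the current CRD, thus not allowing CR migration", cur)
}

func main() {
	// Installed CRD stores objects at v1alpha4; a new CRD that drops
	// v1alpha4 entirely is rejected, reproducing the error in this log.
	current := []crdVersion{{Name: "v1alpha4", Storage: true}}
	proposed := []crdVersion{{Name: "v1beta1"}, {Name: "v1beta2", Storage: true}}
	fmt.Println(canUpgradeCRD(current, proposed))
}
```

The usual fix is to keep the old version (served or not) in the new CRD until stored objects have been migrated and `status.storedVersions` no longer lists it.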
stdout/stderr from junit.e2e_suite.7.xml



25 Passed Tests

4 Skipped Tests

Error lines from build-log.txt

... skipping 2247 lines ...
[7] /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_CAPI_test.go:36
[7]   Clusterctl Upgrade Spec [from v1alpha4]
[7]   /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_CAPI_test.go:138
[7]     Should create a management cluster and then upgrade all the providers [It]
[7]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.0/e2e/clusterctl_upgrade.go:147
[7] 
[7]     failed to run clusterctl upgrade
[7]     Unexpected error:
[7]         <*errors.fundamental | 0xc000d58d80>: {
[7]             msg: "unable to upgrade CRD \"awsclustercontrolleridentities.infrastructure.cluster.x-k8s.io\" because the new CRD does not contain the storage version \"v1alpha4\" of the current CRD, thus not allowing CR migration",
[7]             stack: [0x203ef95, 0x203ea29, 0x2061954, 0x205f765, 0x2092579, 0x20966d8, 0x209972d, 0x210a7d7, 0x948d71, 0x948765, 0x947e5b, 0x94db4a, 0x94d547, 0x9599e8, 0x959705, 0x958da5, 0x95b172, 0x967569, 0x967376, 0x216a53b, 0x520862, 0x46d9a1],
[7]         }
[7]         unable to upgrade CRD "awsclustercontrolleridentities.infrastructure.cluster.x-k8s.io" because the new CRD does not contain the storage version "v1alpha4" of the current CRD, thus not allowing CR migration
[7]     occurred
... skipping 343 lines ...
[7] 
[7] JUnit report was created: /logs/artifacts/junit.e2e_suite.7.xml
[7] 
[7] 
[7] Summarizing 1 Failure:
[7] 
[7] [Fail] [unmanaged] [Cluster API Framework] Clusterctl Upgrade Spec [from v1alpha4] [It] Should create a management cluster and then upgrade all the providers 
[7] /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.0/framework/clusterctl/client.go:142
[7] 
[7] Ran 4 of 4 Specs in 4314.088 seconds
[7] FAIL! -- 3 Passed | 1 Failed | 0 Pending | 0 Skipped
[7] --- FAIL: TestE2E (4314.13s)
[7] FAIL
[4] STEP: Upgrading the Cluster topology
[4] INFO: Patching the new Kubernetes version to Cluster topology
[4] INFO: Waiting for control-plane machines to have the upgraded Kubernetes version
[4] STEP: Ensuring all control-plane machines have upgraded kubernetes version v1.24.0
[5] STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
[5] INFO: Deleting namespace clusterctl-upgrade-a7f8ad
... skipping 15 lines ...
[5] STEP: Deleting namespace used for hosting the "" test spec
[5] INFO: Deleting namespace functional-efs-support-xpjvzi
[5] 
[5] JUnit report was created: /logs/artifacts/junit.e2e_suite.5.xml
[5] 
[5] Ran 3 of 5 Specs in 4318.772 seconds
[5] SUCCESS! -- 3 Passed | 0 Failed | 2 Pending | 0 Skipped
[5] PASS
[3] INFO: Waiting for kube-proxy to have the upgraded kubernetes version
[3] STEP: Ensuring kube-proxy has the correct image
[3] INFO: Waiting for CoreDNS to have the upgraded image tag
[3] STEP: Ensuring CoreDNS has the correct image
[3] INFO: Waiting for etcd to have the upgraded image tag
... skipping 65 lines ...
[3]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.0/e2e/cluster_upgrade.go:117
[3] ------------------------------
[3] 
[3] JUnit report was created: /logs/artifacts/junit.e2e_suite.3.xml
[3] 
[3] Ran 3 of 3 Specs in 4722.132 seconds
[3] SUCCESS! -- 3 Passed | 0 Failed | 0 Pending | 0 Skipped
[3] PASS
[2] STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec
[2] INFO: Deleting namespace k8s-upgrade-and-conformance-1qmqjf
[2] [AfterEach] Cluster Upgrade Spec - HA Control Plane Cluster [K8s-Upgrade]
[2]   /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_CAPI_test.go:233
[2] STEP: Node 2 released resources: {ec2-normal:0, vpc:2, eip:2, ngw:2, igw:2, classiclb:2, ec2-GPU:0, volume-gp2:0}
... skipping 27 lines ...
[6]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.0/e2e/cluster_upgrade.go:117
[6] ------------------------------
[6] 
[6] JUnit report was created: /logs/artifacts/junit.e2e_suite.6.xml
[6] 
[6] Ran 3 of 3 Specs in 5002.104 seconds
[6] SUCCESS! -- 3 Passed | 0 Failed | 0 Pending | 0 Skipped
[6] PASS
[4] INFO: Waiting for kube-proxy to have the upgraded Kubernetes version
[4] STEP: Ensuring kube-proxy has the correct image
[4] INFO: Waiting for CoreDNS to have the upgraded image tag
[4] STEP: Ensuring CoreDNS has the correct image
[4] INFO: Waiting for etcd to have the upgraded image tag
... skipping 2 lines ...
[2] STEP: Deleting namespace used for hosting the "" test spec
[2] INFO: Deleting namespace functional-gpu-cluster-7s7sdc
[2] 
[2] JUnit report was created: /logs/artifacts/junit.e2e_suite.2.xml
[2] 
[2] Ran 4 of 4 Specs in 5387.594 seconds
[2] SUCCESS! -- 4 Passed | 0 Failed | 0 Pending | 0 Skipped
[2] PASS
[4] STEP: Waiting until nodes are ready
[4] STEP: PASSED!
[4] [AfterEach] Cluster Upgrade Spec - HA control plane with workers [K8s-Upgrade] [ClusterClass]
[4]   /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.0/e2e/cluster_upgrade.go:240
[4] STEP: Dumping logs from the "k8s-upgrade-and-conformance-m7pwns" workload cluster
... skipping 25 lines ...
[4] STEP: Deleting namespace used for hosting the "" test spec
[4] INFO: Deleting namespace functional-test-ignition-17rgrm
[4] 
[4] JUnit report was created: /logs/artifacts/junit.e2e_suite.4.xml
[4] 
[4] Ran 4 of 6 Specs in 6504.249 seconds
[4] SUCCESS! -- 4 Passed | 0 Failed | 2 Pending | 0 Skipped
[4] PASS
[1] folder created for eks clusters: /logs/artifacts/clusters/bootstrap/aws-resources
[1] STEP: Tearing down the management cluster
[1] STEP: Deleting cluster-api-provider-aws-sigs-k8s-io CloudFormation stack
[1] 
[1] JUnit report was created: /logs/artifacts/junit.e2e_suite.1.xml
[1] 
[1] Ran 5 of 5 Specs in 6982.909 seconds
[1] SUCCESS! -- 5 Passed | 0 Failed | 0 Pending | 0 Skipped
[1] PASS

Ginkgo ran 1 suite in 1h58m0.652902619s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 3 lines ...
To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc

real	118m0.661s
user	29m31.434s
sys	8m2.582s
make: *** [Makefile:397: test-e2e] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...