Result: FAILURE
Tests: 1 failed / 24 succeeded
Started: 2022-07-27 15:02
Elapsed: 1h25m
Revision: 3b32ca01169ab2cc1510398dd82b0669b9002db8

Test Failures


capa-e2e [unmanaged] [functional] Multitenancy test should create cluster with nested assumed role (21s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capa\-e2e\s\[unmanaged\]\s\[functional\]\sMultitenancy\stest\sshould\screate\scluster\swith\snested\sassumed\srole$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:163
Expected success, but got an error:
    <*errors.withStack | 0xc000fe2a80>: {
        error: <*exec.ExitError | 0xc000fe6c00>{
            ProcessState: {
                pid: 30059,
                status: 256,
                rusage: {
                    Utime: {Sec: 0, Usec: 695726},
                    Stime: {Sec: 0, Usec: 355215},
                    Maxrss: 106148,
                    Ixrss: 0,
                    Idrss: 0,
                    Isrss: 0,
                    Minflt: 15134,
                    Majflt: 0,
                    Nswap: 0,
                    Inblock: 0,
                    Oublock: 28456,
                    Msgsnd: 0,
                    Msgrcv: 0,
                    Nsignals: 0,
                    Nvcsw: 3544,
                    Nivcsw: 2122,
                },
            },
            Stderr: nil,
        },
        stack: [0x1a89ce0, 0x1a8a230, 0x1c07e2c, 0x1f89ef4, 0x2031578, 0x94c871, 0x94c265, 0x94b95b, 0x95164a, 0x951047, 0x95d4e8, 0x95d205, 0x95c8a5, 0x95ec72, 0x96b089, 0x96ae96, 0x20336fb, 0x520602, 0x46d9a1],
    }
    exit status 1
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/framework/clusterctl/clusterctl_helpers.go:273
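A note for readers decoding the dump above: the "status: 256" field in ProcessState is the raw wait(2) status, and the process exit code lives in its high byte, which is why the wrapped error renders as "exit status 1". A quick sanity check in shell:

    # The wait status packs the exit code into the high byte.
    echo $(( 256 >> 8 ))    # prints 1, matching "exit status 1"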
				
Full stdout/stderr: junit.e2e_suite.14.xml



24 passed tests (collapsed)

4 skipped tests (collapsed)

Error lines from build-log.txt

... skipping 577 lines ...
[1]  ✓ Installing CNI 🔌
[1]  • Installing StorageClass 💾  ...
[1]  ✓ Installing StorageClass 💾
[1] INFO: The kubeconfig file for the kind cluster is /tmp/e2e-kind1602129886
[1] INFO: Loading image: "gcr.io/k8s-staging-cluster-api/capa-manager:e2e"
[1] INFO: Loading image: "quay.io/jetstack/cert-manager-cainjector:v1.7.2"
[1] INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-cainjector:v1.7.2" into the kind cluster "test-0eajum": error saving image "quay.io/jetstack/cert-manager-cainjector:v1.7.2" to "/tmp/image-tar1770053570/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "quay.io/jetstack/cert-manager-webhook:v1.7.2"
[1] INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-webhook:v1.7.2" into the kind cluster "test-0eajum": error saving image "quay.io/jetstack/cert-manager-webhook:v1.7.2" to "/tmp/image-tar3530470660/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "quay.io/jetstack/cert-manager-controller:v1.7.2"
[1] INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-controller:v1.7.2" into the kind cluster "test-0eajum": error saving image "quay.io/jetstack/cert-manager-controller:v1.7.2" to "/tmp/image-tar1716781464/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.2"
[1] INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.2" into the kind cluster "test-0eajum": error saving image "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.2" to "/tmp/image-tar1901130334/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2"
[1] INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2" into the kind cluster "test-0eajum": error saving image "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2" to "/tmp/image-tar34786749/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.2"
[1] INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.2" into the kind cluster "test-0eajum": error saving image "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.2" to "/tmp/image-tar3010327906/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] STEP: Setting environment variable: key=AWS_B64ENCODED_CREDENTIALS, value=*******
[1] STEP: Writing AWS service quotas to a file for parallel tests
[1] STEP: Initializing the bootstrap cluster
[1] INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure aws
[1] STEP: Waiting for provider controllers to be running
[1] STEP: Waiting for deployment capa-system/capa-controller-manager to be available
... skipping 720 lines ...
[6] configmap/cni-cluster-jo3yev-crs-0 created
[6] clusterresourceset.addons.cluster.x-k8s.io/cluster-jo3yev-crs-0 created
[6] awsclusterroleidentity.infrastructure.cluster.x-k8s.io/capamultitenancyjump created
[6] awsclusterroleidentity.infrastructure.cluster.x-k8s.io/capamultitenancynested created
[6] 
[6] INFO: Waiting for the cluster infrastructure to be provisioned
[14] Error from server (AlreadyExists): error when creating "STDIN": awsclusterroleidentities.infrastructure.cluster.x-k8s.io "capamultitenancyjump" already exists
[14] Error from server (AlreadyExists): error when creating "STDIN": awsclusterroleidentities.infrastructure.cluster.x-k8s.io "capamultitenancynested" already exists
[14] 
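These AlreadyExists errors look like the trigger for the Multitenancy failure reported above: AWSClusterRoleIdentity is a cluster-scoped resource, so when two parallel suite nodes ([6] and [14]) render identities with the same capamultitenancyjump/capamultitenancynested names, the second bare create is rejected. A minimal illustration against any live cluster, with identity.yaml standing in for a hypothetical manifest; kubectl create fails on a re-run while kubectl apply is idempotent:

    kubectl create -f identity.yaml   # first run: created
    kubectl create -f identity.yaml   # re-run: Error from server (AlreadyExists)
    kubectl apply -f identity.yaml    # idempotent for the same manifest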
[14] STEP: Dumping all the Cluster API resources in the "functional-multitenancy-nested-hpxyvm" namespace
[6] STEP: Waiting for cluster to enter the provisioned phase
[14] STEP: Dumping all EC2 instances in the "functional-multitenancy-nested-hpxyvm" namespace
[14] STEP: Deleting all clusters in the "functional-multitenancy-nested-hpxyvm" namespace with intervals ["20m" "10s"]
[14] STEP: Deleting cluster functional-multitenancy-nested-n73ry9
... skipping 8 lines ...
[14] /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:50
[14]   Multitenancy test
[14]   /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:162
[14]     should create cluster with nested assumed role [It]
[14]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:163
[14] 
[14]     Expected success, but got an error:
[14]         <*errors.withStack | 0xc000fe2a80>: {
[14]             error: <*exec.ExitError | 0xc000fe6c00>{
[14]                 ProcessState: {
[14]                     pid: 30059,
[14]                     status: 256,
[14]                     rusage: {
[14]                         Utime: {Sec: 0, Usec: 695726},
[14]                         Stime: {Sec: 0, Usec: 355215},
... skipping 671 lines ...
[5]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:709
[5] ------------------------------
[5] 
[5] JUnit report was created: /logs/artifacts/junit.e2e_suite.5.xml
[5] 
[5] Ran 1 of 3 Specs in 1636.380 seconds
[5] SUCCESS! -- 1 Passed | 0 Failed | 2 Pending | 0 Skipped
[5] PASS
[8] STEP: Retrieving IDs of dynamically provisioned volumes.
[8] STEP: Ensuring dynamically provisioned volumes exist
[8] STEP: Deleting LB service
[8] STEP: Deleting the Clusters
[8] STEP: Deleting cluster only-csi-external-upgrade-6bdzx7
... skipping 25 lines ...
[4]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:466
[4] ------------------------------
[4] 
[4] JUnit report was created: /logs/artifacts/junit.e2e_suite.4.xml
[4] 
[4] Ran 1 of 1 Specs in 1683.118 seconds
[4] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[4] PASS
[18] STEP: Node 18 acquired resources: {ec2-normal:0, vpc:2, eip:2, ngw:2, igw:2, classiclb:2, ec2-GPU:0, volume-gp2:0}
[18] [BeforeEach] Clusterctl Upgrade Spec [from latest v1beta1 release to main]
[18]   /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:122
[18] STEP: Creating a namespace for hosting the "clusterctl-upgrade" test spec
[18] INFO: Creating namespace clusterctl-upgrade-qj2t9u
... skipping 39 lines ...
[2]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_clusterclass_test.go:96
[2] ------------------------------
[2] 
[2] JUnit report was created: /logs/artifacts/junit.e2e_suite.2.xml
[2] 
[2] Ran 1 of 1 Specs in 1748.984 seconds
[2] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[2] PASS
[13] INFO: Waiting for control plane to be initialized
[13] INFO: Waiting for the first control plane machine managed by quick-start-3f4zha/quick-start-6usfwf-control-plane to be provisioned
[13] STEP: Waiting for one control plane node to exist
[11] STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
[11] INFO: Deleting namespace mhc-remediation-m7k8f8
... skipping 10 lines ...
[11]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/mhc_remediations.go:82
[11] ------------------------------
[11] 
[11] JUnit report was created: /logs/artifacts/junit.e2e_suite.11.xml
[11] 
[11] Ran 1 of 1 Specs in 1811.258 seconds
[11] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[11] PASS
[7] STEP: Node 7 acquired resources: {ec2-normal:0, vpc:2, eip:2, ngw:2, igw:2, classiclb:2, ec2-GPU:0, volume-gp2:0}
[7] [BeforeEach] Clusterctl Upgrade Spec [from v1alpha3]
[7]   /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:122
[7] STEP: Creating a namespace for hosting the "clusterctl-upgrade" test spec
[7] INFO: Creating namespace clusterctl-upgrade-8m0605
... skipping 60 lines ...
[16]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:551
[16] ------------------------------
[16] 
[16] JUnit report was created: /logs/artifacts/junit.e2e_suite.16.xml
[16] 
[16] Ran 1 of 1 Specs in 1837.800 seconds
[16] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[16] PASS
[14] INFO: Waiting for control plane to be ready
[14] INFO: Waiting for control plane k8s-upgrade-and-conformance-92oggn/k8s-upgrade-and-conformance-cpo4bk-control-plane to be ready (implies underlying nodes to be ready as well)
[14] STEP: Waiting for the control plane to be ready
[9] STEP: Deleting namespace used for hosting the "self-hosted" test spec
[9] INFO: Deleting namespace self-hosted-5bzedx
... skipping 10 lines ...
[9]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/self_hosted.go:80
[9] ------------------------------
[9] 
[9] JUnit report was created: /logs/artifacts/junit.e2e_suite.9.xml
[9] 
[9] Ran 1 of 1 Specs in 1858.638 seconds
[9] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[9] PASS
[10] STEP: Node 10 acquired resources: {ec2-normal:0, vpc:2, eip:2, ngw:2, igw:2, classiclb:2, ec2-GPU:0, volume-gp2:0}
[10] [BeforeEach] Cluster Upgrade Spec - HA Control Plane Cluster [K8s-Upgrade]
[10]   /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/cluster_upgrade.go:81
[10] STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec
[10] INFO: Creating namespace k8s-upgrade-and-conformance-rwb32i
... skipping 97 lines ...
[12]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:260
[12] ------------------------------
[12] 
[12] JUnit report was created: /logs/artifacts/junit.e2e_suite.12.xml
[12] 
[12] Ran 1 of 2 Specs in 2051.519 seconds
[12] SUCCESS! -- 1 Passed | 0 Failed | 1 Pending | 0 Skipped
[12] PASS
[3] STEP: Node 3 acquired resources: {ec2-normal:0, vpc:2, eip:2, ngw:2, igw:2, classiclb:2, ec2-GPU:0, volume-gp2:0}
[3] [BeforeEach] Cluster Upgrade Spec - HA control plane with scale in rollout [K8s-Upgrade]
[3]   /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/cluster_upgrade.go:81
[3] STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec
[3] INFO: Creating namespace k8s-upgrade-and-conformance-ouiptn
... skipping 43 lines ...
[8]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:328
[8] ------------------------------
[8] 
[8] JUnit report was created: /logs/artifacts/junit.e2e_suite.8.xml
[8] 
[8] Ran 1 of 1 Specs in 2072.982 seconds
[8] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[8] PASS
[6] INFO: Waiting for the machine pools to be provisioned
[6] STEP: PASSED!
[6] [AfterEach] Running the quick-start spec with ClusterClass
[6]   /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/quick_start.go:107
[6] STEP: Dumping logs from the "quick-start-scyb6h" workload cluster
... skipping 17 lines ...
[15]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/mhc_remediations.go:114
[15] ------------------------------
[15] 
[15] JUnit report was created: /logs/artifacts/junit.e2e_suite.15.xml
[15] 
[15] Ran 1 of 1 Specs in 2106.356 seconds
[15] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[15] PASS
[10] INFO: Waiting for control plane to be initialized
[10] INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-rwb32i/k8s-upgrade-and-conformance-rv24pz-control-plane to be provisioned
[10] STEP: Waiting for one control plane node to exist
[19] INFO: Waiting for control plane to be ready
[19] INFO: Waiting for control plane clusterctl-upgrade-rm21u5/clusterctl-upgrade-vo9ysk-control-plane to be ready (implies underlying nodes to be ready as well)
... skipping 19 lines ...
[20]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:397
[20] ------------------------------
[20] 
[20] JUnit report was created: /logs/artifacts/junit.e2e_suite.20.xml
[20] 
[20] Ran 1 of 1 Specs in 2172.015 seconds
[20] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[20] PASS
[19] INFO: Waiting for the machine deployments to be provisioned
[19] STEP: Waiting for the workload nodes to exist
[18] INFO: Waiting for the machine pools to be provisioned
[18] STEP: Turning the workload cluster into a management cluster with older versions of providers
[18] INFO: Downloading clusterctl binary from https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.1.2/clusterctl-linux-amd64
... skipping 106 lines ...
[13]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/quick_start.go:77
[13] ------------------------------
[13] 
[13] JUnit report was created: /logs/artifacts/junit.e2e_suite.13.xml
[13] 
[13] Ran 2 of 2 Specs in 2367.609 seconds
[13] SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 0 Skipped
[13] PASS
[7] INFO: Creating log watcher for controller capa-system/capa-controller-manager, pod capa-controller-manager-5ccb45447b-jbpq8, container manager
[7] INFO: Creating log watcher for controller capa-system/capa-controller-manager, pod capa-controller-manager-5ccb45447b-jbpq8, container kube-rbac-proxy
[7] STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available
[7] INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-5f8984f4f5-7mrh5, container kube-rbac-proxy
[7] INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-5f8984f4f5-7mrh5, container manager
... skipping 60 lines ...
[17]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/machine_pool.go:76
[17] ------------------------------
[17] 
[17] JUnit report was created: /logs/artifacts/junit.e2e_suite.17.xml
[17] 
[17] Ran 1 of 1 Specs in 2440.917 seconds
[17] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[17] PASS
[14] INFO: Waiting for CoreDNS to have the upgraded image tag
[14] STEP: Ensuring CoreDNS has the correct image
[14] INFO: Waiting for etcd to have the upgraded image tag
[14] STEP: Upgrading the machine deployment
[14] INFO: Patching the new kubernetes version to Machine Deployment k8s-upgrade-and-conformance-92oggn/k8s-upgrade-and-conformance-cpo4bk-md-0
... skipping 24 lines ...
[6]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/quick_start.go:77
[6] ------------------------------
[6] 
[6] JUnit report was created: /logs/artifacts/junit.e2e_suite.6.xml
[6] 
[6] Ran 2 of 2 Specs in 2593.369 seconds
[6] SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 0 Skipped
[6] PASS
[3] INFO: Waiting for control plane k8s-upgrade-and-conformance-ouiptn/k8s-upgrade-and-conformance-s4pbz1-control-plane to be ready (implies underlying nodes to be ready as well)
[3] STEP: Waiting for the control plane to be ready
[3] INFO: Waiting for the machine deployments to be provisioned
[3] STEP: Waiting for the workload nodes to exist
[3] INFO: Waiting for the machine pools to be provisioned
... skipping 147 lines ...
[18]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:147
[18] ------------------------------
[18] 
[18] JUnit report was created: /logs/artifacts/junit.e2e_suite.18.xml
[18] 
[18] Ran 1 of 1 Specs in 3659.742 seconds
[18] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[18] PASS
[3] INFO: Waiting for kube-proxy to have the upgraded kubernetes version
[3] STEP: Ensuring kube-proxy has the correct image
[3] INFO: Waiting for CoreDNS to have the upgraded image tag
[3] STEP: Ensuring CoreDNS has the correct image
[3] INFO: Waiting for etcd to have the upgraded image tag
... skipping 44 lines ...
[14] 
[14] JUnit report was created: /logs/artifacts/junit.e2e_suite.14.xml
[14] 
[14] 
[14] Summarizing 1 Failure:
[14] 
[14] [Fail] [unmanaged] [functional] Multitenancy test [It] should create cluster with nested assumed role 
[14] /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/framework/clusterctl/clusterctl_helpers.go:273
[14] 
[14] Ran 2 of 2 Specs in 3723.735 seconds
[14] FAIL! -- 1 Passed | 1 Failed | 0 Pending | 0 Skipped
[14] --- FAIL: TestE2E (3723.77s)
[14] FAIL
[19] STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
[19] INFO: Deleting namespace clusterctl-upgrade-rm21u5
[19] [AfterEach] Clusterctl Upgrade Spec [from v1alpha4]
[19]   /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_CAPI_test.go:183
[19] STEP: Node 19 released resources: {ec2-normal:0, vpc:2, eip:2, ngw:2, igw:2, classiclb:2, ec2-GPU:0, volume-gp2:0}
[19] 
... skipping 6 lines ...
[19]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:147
[19] ------------------------------
[19] 
[19] JUnit report was created: /logs/artifacts/junit.e2e_suite.19.xml
[19] 
[19] Ran 1 of 1 Specs in 3830.754 seconds
[19] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[19] PASS
[3] STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec
[3] INFO: Deleting namespace k8s-upgrade-and-conformance-ouiptn
[3] 
[3] • [SLOW TEST:2747.418 seconds]
[3] [unmanaged] [Cluster API Framework]
... skipping 24 lines ...
[7]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/e2e/clusterctl_upgrade.go:147
[7] ------------------------------
[7] 
[7] JUnit report was created: /logs/artifacts/junit.e2e_suite.7.xml
[7] 
[7] Ran 1 of 1 Specs in 4036.700 seconds
[7] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[7] PASS
[10] STEP: Deleting namespace used for hosting the "" test spec
[10] INFO: Deleting namespace functional-gpu-cluster-ph5t0p
[10] 
[10] JUnit report was created: /logs/artifacts/junit.e2e_suite.10.xml
[10] 
[10] Ran 2 of 2 Specs in 4161.499 seconds
[10] SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 0 Skipped
[10] PASS
[3] STEP: Deleting namespace used for hosting the "" test spec
[3] INFO: Deleting namespace functional-test-ignition-866vw2
[3] 
[3] JUnit report was created: /logs/artifacts/junit.e2e_suite.3.xml
[3] 
[3] Ran 2 of 3 Specs in 4421.757 seconds
[3] SUCCESS! -- 2 Passed | 0 Failed | 1 Pending | 0 Skipped
[3] PASS
[1] folder created for eks clusters: /logs/artifacts/clusters/bootstrap/aws-resources
[1] STEP: Tearing down the management cluster
[1] STEP: Deleting cluster-api-provider-aws-sigs-k8s-io CloudFormation stack
[1] 
[1] JUnit report was created: /logs/artifacts/junit.e2e_suite.1.xml
[1] 
[1] Ran 1 of 1 Specs in 4879.681 seconds
[1] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[1] PASS

Ginkgo ran 1 suite in 1h22m52.122394325s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 3 lines ...
To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc

real	82m52.134s
user	27m29.905s
sys	7m15.387s
make: *** [Makefile:409: test-e2e] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...