PR (sedefsavas): Bump Kubernetes to v1.24.0 and fix AWSMachinePool minsize
Result: FAILURE
Tests: 1 failed / 23 succeeded
Started: 2022-07-19 10:28
Elapsed: 1h46m
Revision: b1165081df795c415a6b9dc98b5899c9fae00126
Refs: 3468

Test Failures


capa-e2e [unmanaged] [Cluster API Framework] Machine Pool Spec Should successfully create a cluster with machine pool machines (56m24s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capa\-e2e\s\[unmanaged\]\s\[Cluster\sAPI\sFramework\]\sMachine\sPool\sSpec\sShould\ssuccessfully\screate\sa\scluster\swith\smachine\spool\smachines$'
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.5/e2e/machine_pool.go:76
Timed out after 2400.000s.
Expected
    <int>: 1
to equal
    <int>: 0
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.5/framework/machinepool_helpers.go:90
				
Full stdout/stderr is in junit.e2e_suite.6.xml
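
The failure above is a Gomega Eventually timeout: the poll at machinepool_helpers.go:90 never saw its actual value reach the expected 0 within the 2400s (40 minute) window, so Gomega reports the last observed value (1) against the expectation (0). Below is a minimal Go sketch of that pattern; countPendingReplicas and the intervals are illustrative assumptions, not the actual cluster-api helper code.

package e2e_test

import (
	"testing"
	"time"

	. "github.com/onsi/gomega"
)

// Minimal sketch of the Eventually pattern behind the failure above.
// countPendingReplicas is a hypothetical stand-in for whatever the real
// helper at machinepool_helpers.go:90 polls.
func TestMachinePoolReplicasSettle(t *testing.T) {
	g := NewWithT(t)

	countPendingReplicas := func() int {
		// The real helper would query the management cluster for MachinePool
		// instances that have not yet reached the desired state.
		return 1
	}

	// Poll until no replicas are pending, giving up after 40 minutes (2400s).
	// If the value never reaches 0, Gomega prints the last actual value (1)
	// against the expected value (0), producing the message in the log above.
	g.Eventually(countPendingReplicas, 40*time.Minute, 10*time.Second).Should(Equal(0))
}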



23 Passed Tests

4 Skipped Tests

Error lines from build-log.txt

... skipping 467 lines ...
[1]  ✓ Installing CNI 🔌
[1]  • Installing StorageClass 💾  ...
[1]  ✓ Installing StorageClass 💾
[1] INFO: The kubeconfig file for the kind cluster is /tmp/e2e-kind619650916
[1] INFO: Loading image: "gcr.io/k8s-staging-cluster-api/capa-manager:e2e"
[1] INFO: Loading image: "quay.io/jetstack/cert-manager-cainjector:v1.7.2"
[1] INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-cainjector:v1.7.2" into the kind cluster "test-qkzicl": error saving image "quay.io/jetstack/cert-manager-cainjector:v1.7.2" to "/tmp/image-tar4004075153/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "quay.io/jetstack/cert-manager-webhook:v1.7.2"
[1] INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-webhook:v1.7.2" into the kind cluster "test-qkzicl": error saving image "quay.io/jetstack/cert-manager-webhook:v1.7.2" to "/tmp/image-tar2493292641/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "quay.io/jetstack/cert-manager-controller:v1.7.2"
[1] INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-controller:v1.7.2" into the kind cluster "test-qkzicl": error saving image "quay.io/jetstack/cert-manager-controller:v1.7.2" to "/tmp/image-tar1916159560/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.5"
[1] INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.5" into the kind cluster "test-qkzicl": error saving image "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.5" to "/tmp/image-tar3407674011/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.5"
[1] INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.5" into the kind cluster "test-qkzicl": error saving image "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.5" to "/tmp/image-tar3700427958/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.5"
[1] INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.5" into the kind cluster "test-qkzicl": error saving image "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.5" to "/tmp/image-tar1182682777/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] STEP: Setting environment variable: key=AWS_B64ENCODED_CREDENTIALS, value=*******
[1] STEP: Writing AWS service quotas to a file for parallel tests
[1] STEP: Initializing the bootstrap cluster
[1] INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure aws
[1] STEP: Waiting for provider controllers to be running
[1] STEP: Waiting for deployment capa-system/capa-controller-manager to be available
... skipping 1819 lines ...
[5]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_clusterclass_test.go:96
[5] ------------------------------
[5] 
[5] JUnit report was created: /logs/artifacts/junit.e2e_suite.5.xml
[5] 
[5] Ran 4 of 4 Specs in 4700.342 seconds
[5] SUCCESS! -- 4 Passed | 0 Failed | 0 Pending | 0 Skipped
[5] PASS
[2] INFO: Waiting for kube-proxy to have the upgraded kubernetes version
[2] STEP: Ensuring kube-proxy has the correct image
[2] INFO: Waiting for CoreDNS to have the upgraded image tag
[2] STEP: Ensuring CoreDNS has the correct image
[2] INFO: Waiting for etcd to have the upgraded image tag
... skipping 162 lines ...
[6] 
[6] JUnit report was created: /logs/artifacts/junit.e2e_suite.6.xml
[6] 
[6] 
[6] Summarizing 1 Failure:
[6] 
[6] [Fail] [unmanaged] [Cluster API Framework] Machine Pool Spec [It] Should successfully create a cluster with machine pool machines 
[6] /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.5/framework/machinepool_helpers.go:90
[6] 
[6] Ran 2 of 2 Specs in 5147.845 seconds
[6] FAIL! -- 1 Passed | 1 Failed | 0 Pending | 0 Skipped
[6] --- FAIL: TestE2E (5147.87s)
[6] FAIL
[1] STEP: Deleting namespace used for hosting the "quick-start" test spec
[1] INFO: Deleting namespace quick-start-zrrjxu
[1] [AfterEach] Running the quick-start spec with ClusterClass
[1]   /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_CAPI_quick_clusterclass_test.go:67
[1] STEP: Node 1 released resources: {ec2-normal:4, vpc:1, eip:3, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0}
[1] [AfterEach] [unmanaged] [Cluster API Framework] [smoke] [PR-Blocking]
... skipping 24 lines ...
[4] STEP: Deleting namespace used for hosting the "" test spec
[4] INFO: Deleting namespace functional-test-ignition-lk76xr
[4] 
[4] JUnit report was created: /logs/artifacts/junit.e2e_suite.4.xml
[4] 
[4] Ran 4 of 6 Specs in 5169.126 seconds
[4] SUCCESS! -- 4 Passed | 0 Failed | 2 Pending | 0 Skipped
[4] PASS
[7] STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec
[7] INFO: Deleting namespace k8s-upgrade-and-conformance-2gb3vh
[7] 
[7] • [SLOW TEST:2225.966 seconds]
[7] [unmanaged] [Cluster API Framework]
... skipping 4 lines ...
[7]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.5/e2e/cluster_upgrade.go:115
[7] ------------------------------
[7] 
[7] JUnit report was created: /logs/artifacts/junit.e2e_suite.7.xml
[7] 
[7] Ran 3 of 3 Specs in 5191.980 seconds
[7] SUCCESS! -- 3 Passed | 0 Failed | 0 Pending | 0 Skipped
[7] PASS
[3] STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec
[3] INFO: Deleting namespace k8s-upgrade-and-conformance-aekunm
[3] [AfterEach] Cluster Upgrade Spec - Single control plane with workers [K8s-Upgrade]
[3]   /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_CAPI_test.go:183
[3] STEP: Node 3 released resources: {ec2-normal:0, vpc:2, eip:2, ngw:2, igw:2, classiclb:2, ec2-GPU:0, volume-gp2:0}
... skipping 7 lines ...
[3]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.5/e2e/cluster_upgrade.go:115
[3] ------------------------------
[3] 
[3] JUnit report was created: /logs/artifacts/junit.e2e_suite.3.xml
[3] 
[3] Ran 3 of 3 Specs in 5370.260 seconds
[3] SUCCESS! -- 3 Passed | 0 Failed | 0 Pending | 0 Skipped
[3] PASS
[2] STEP: Deleting namespace used for hosting the "" test spec
[2] INFO: Deleting namespace functional-gpu-cluster-vji668
[2] 
[2] JUnit report was created: /logs/artifacts/junit.e2e_suite.2.xml
[2] 
[2] Ran 4 of 4 Specs in 5684.991 seconds
[2] SUCCESS! -- 4 Passed | 0 Failed | 0 Pending | 0 Skipped
[2] PASS
[1] folder created for eks clusters: /logs/artifacts/clusters/bootstrap/aws-resources
[1] STEP: Tearing down the management cluster
[1] STEP: Deleting cluster-api-provider-aws-sigs-k8s-io CloudFormation stack
[1] 
[1] JUnit report was created: /logs/artifacts/junit.e2e_suite.1.xml
[1] 
[1] Ran 4 of 6 Specs in 6137.367 seconds
[1] SUCCESS! -- 4 Passed | 0 Failed | 2 Pending | 0 Skipped
[1] PASS

Ginkgo ran 1 suite in 1h43m57.399215029s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 3 lines ...
To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc

real	103m57.408s
user	33m12.806s
sys	9m0.116s
make: *** [Makefile:409: test-e2e] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...