PR: sedefsavas: Bump Kubernetes to v1.24.0 and fix AWSMachinePool minsize
Result: FAILURE
Tests: 0 failed / 1 succeeded
Started: 2022-07-27 19:45
Elapsed: 32m0s
Revision: 0b109b4dfd3fb3f00d221e29c52f38f359f9db5c
Refs: 3468

No Test Failures!


1 Passed Test

27 Skipped Tests

Error lines from build-log.txt

... skipping 507 lines ...
[1]  ✓ Installing CNI 🔌
[1]  • Installing StorageClass 💾  ...
[1]  ✓ Installing StorageClass 💾
[1] INFO: The kubeconfig file for the kind cluster is /tmp/e2e-kind1618305020
[1] INFO: Loading image: "gcr.io/k8s-staging-cluster-api/capa-manager:e2e"
[1] INFO: Loading image: "quay.io/jetstack/cert-manager-cainjector:v1.7.2"
[1] INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-cainjector:v1.7.2" into the kind cluster "test-i99ndl": error saving image "quay.io/jetstack/cert-manager-cainjector:v1.7.2" to "/tmp/image-tar1516037448/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "quay.io/jetstack/cert-manager-webhook:v1.7.2"
[1] INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-webhook:v1.7.2" into the kind cluster "test-i99ndl": error saving image "quay.io/jetstack/cert-manager-webhook:v1.7.2" to "/tmp/image-tar2826927855/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "quay.io/jetstack/cert-manager-controller:v1.7.2"
[1] INFO: [WARNING] Unable to load image "quay.io/jetstack/cert-manager-controller:v1.7.2" into the kind cluster "test-i99ndl": error saving image "quay.io/jetstack/cert-manager-controller:v1.7.2" to "/tmp/image-tar490212029/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.5"
[1] INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.5" into the kind cluster "test-i99ndl": error saving image "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.5" to "/tmp/image-tar4025499788/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.5"
[1] INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.5" into the kind cluster "test-i99ndl": error saving image "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.5" to "/tmp/image-tar231200154/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.5"
[1] INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.5" into the kind cluster "test-i99ndl": error saving image "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.5" to "/tmp/image-tar2865083116/image.tar": unable to read image data: Error response from daemon: reference does not exist
[1] STEP: Setting environment variable: key=AWS_B64ENCODED_CREDENTIALS, value=*******
[1] STEP: Writing AWS service quotas to a file for parallel tests
[1] STEP: Initializing the bootstrap cluster
[1] INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure aws
[1] STEP: Waiting for provider controllers to be running
[1] STEP: Waiting for deployment capa-system/capa-controller-manager to be available
... skipping 43 lines ...
[3] STEP: Setting environment variable: key=AWS_SSH_KEY_NAME, value=cluster-api-provider-aws-sigs-k8s-io
[3] STEP: Setting environment variable: key=AWS_B64ENCODED_CREDENTIALS, value=*******
[3] 
[3] JUnit report was created: /logs/artifacts/junit.e2e_suite.3.xml
[3] 
[3] Ran 0 of 0 Specs in 315.993 seconds
[3] SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 0 Skipped
[3] PASS | FOCUSED
[4] STEP: Setting environment variable: key=AWS_AVAILABILITY_ZONE_1, value=us-west-2a
[4] STEP: Setting environment variable: key=AWS_AVAILABILITY_ZONE_2, value=us-west-2b
[4] STEP: Setting environment variable: key=AWS_REGION, value=us-west-2
[4] STEP: Setting environment variable: key=AWS_SSH_KEY_NAME, value=cluster-api-provider-aws-sigs-k8s-io
[4] STEP: Setting environment variable: key=AWS_B64ENCODED_CREDENTIALS, value=*******
[5] STEP: Node 5 acquiring resources: {ec2-normal:0, vpc:1, eip:3, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0}
[4] 
[4] JUnit report was created: /logs/artifacts/junit.e2e_suite.4.xml
[4] 
[4] Ran 0 of 0 Specs in 316.009 seconds
[4] SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 0 Skipped
[4] PASS | FOCUSED
[6] STEP: Setting environment variable: key=AWS_AVAILABILITY_ZONE_1, value=us-west-2a
[6] STEP: Setting environment variable: key=AWS_AVAILABILITY_ZONE_2, value=us-west-2b
[6] STEP: Setting environment variable: key=AWS_REGION, value=us-west-2
[6] STEP: Setting environment variable: key=AWS_SSH_KEY_NAME, value=cluster-api-provider-aws-sigs-k8s-io
[6] STEP: Setting environment variable: key=AWS_B64ENCODED_CREDENTIALS, value=*******
[6] 
[6] JUnit report was created: /logs/artifacts/junit.e2e_suite.6.xml
[6] 
[6] Ran 0 of 0 Specs in 315.999 seconds
[6] SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 0 Skipped
[6] PASS | FOCUSED
[2] STEP: Setting environment variable: key=AWS_AVAILABILITY_ZONE_1, value=us-west-2a
[2] STEP: Setting environment variable: key=AWS_AVAILABILITY_ZONE_2, value=us-west-2b
[2] STEP: Setting environment variable: key=AWS_REGION, value=us-west-2
[2] STEP: Setting environment variable: key=AWS_SSH_KEY_NAME, value=cluster-api-provider-aws-sigs-k8s-io
[2] STEP: Setting environment variable: key=AWS_B64ENCODED_CREDENTIALS, value=*******
[2] 
[2] JUnit report was created: /logs/artifacts/junit.e2e_suite.2.xml
[2] 
[2] Ran 0 of 0 Specs in 316.007 seconds
[2] SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 0 Skipped
[2] PASS | FOCUSED
[7] STEP: Setting environment variable: key=AWS_AVAILABILITY_ZONE_1, value=us-west-2a
[7] STEP: Setting environment variable: key=AWS_AVAILABILITY_ZONE_2, value=us-west-2b
[7] STEP: Setting environment variable: key=AWS_REGION, value=us-west-2
[7] STEP: Setting environment variable: key=AWS_SSH_KEY_NAME, value=cluster-api-provider-aws-sigs-k8s-io
[7] STEP: Setting environment variable: key=AWS_B64ENCODED_CREDENTIALS, value=*******
[7] 
[7] JUnit report was created: /logs/artifacts/junit.e2e_suite.7.xml
[7] 
[7] Ran 0 of 0 Specs in 316.055 seconds
[7] SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 0 Skipped
[7] PASS | FOCUSED
[5] STEP: Node 5 acquired resources: {ec2-normal:0, vpc:1, eip:3, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0}
[5] [BeforeEach] Machine Pool Spec
[5]   /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.5/e2e/machine_pool.go:61
[5] STEP: Creating a namespace for hosting the "machine-pool" test spec
[5] INFO: Creating namespace machine-pool-9pflwg
... skipping 59 lines ...
[5]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.5/e2e/machine_pool.go:76
[5] ------------------------------
[5] 
[5] JUnit report was created: /logs/artifacts/junit.e2e_suite.5.xml
[5] 
[5] Ran 1 of 5 Specs in 1553.766 seconds
[5] SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 4 Skipped
[5] PASS | FOCUSED
[1] SSSSSSSSSSSSSSSSSSSSSSSfolder created for eks clusters: /logs/artifacts/clusters/bootstrap/aws-resources
[1] STEP: Tearing down the management cluster
[1] STEP: Deleting cluster-api-provider-aws-sigs-k8s-io CloudFormation stack
[1] 
[1] JUnit report was created: /logs/artifacts/junit.e2e_suite.1.xml
[1] 
[1] Ran 0 of 23 Specs in 1602.840 seconds
[1] SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 23 Skipped
[1] PASS | FOCUSED

Ginkgo ran 1 suite in 28m40.354947287s
Test Suite Passed
Detected Programmatic Focus - setting exit status to 197

real	28m40.367s
user	12m36.652s
sys	2m53.620s
make: *** [Makefile:409: test-e2e] Error 197
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...