PR: Ankitasw: Update CAPI to v1.2.2
Result: FAILURE
Tests: 1 failed / 25 succeeded
Started: 2022-09-24 03:43
Elapsed: 1h42m
Revision: 6baacaad810ce3dcd87bb38912e81cdde6333dd2
Refs: 3739

Test Failures


capa-e2e [unmanaged] [functional] GPU-enabled cluster test should create cluster with single worker (16m14s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capa\-e2e\s\[unmanaged\]\s\[functional\]\sGPU\-enabled\scluster\stest\sshould\screate\scluster\swith\ssingle\sworker$'
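The value passed to --ginkgo.focus is an escaped regular expression that Ginkgo matches against each spec's full description, built by concatenating the suite name with the strings of the enclosing Describe/Context/It nodes. A minimal sketch of the shape such a spec declaration takes (the body is hypothetical; the real spec lives at unmanaged_functional_test.go:116):

package unmanaged_test

import (
    . "github.com/onsi/ginkgo"
)

// Ginkgo matches the focus regex against each spec's full description,
// here "[unmanaged] [functional] GPU-enabled cluster test should
// create cluster with single worker" (prefixed by the suite name).
var _ = Describe("[unmanaged] [functional]", func() {
    Context("GPU-enabled cluster test", func() {
        It("should create cluster with single worker", func() {
            // hypothetical body: provision a workload cluster with a
            // single GPU-backed worker, then run a CUDA sample Job
        })
    })
})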
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:116
Timed out after 600.069s.
Job default/cuda-vector-add failed
Job:
{
  "metadata": {
    "name": "cuda-vector-add",
    "namespace": "default",
    "uid": "da6e733e-ab6d-48c8-81c3-7c051038053d",
    "resourceVersion": "736",
    "generation": 1,
    "creationTimestamp": "2022-09-24T04:41:58Z",
    "labels": {
      "controller-uid": "da6e733e-ab6d-48c8-81c3-7c051038053d",
      "job-name": "cuda-vector-add"
    },
    "managedFields": [
      {
        "manager": "cluster-api-e2e",
        "operation": "Update",
        "apiVersion": "batch/v1",
        "time": "2022-09-24T04:41:58Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:spec": {
            "f:backoffLimit": {},
            "f:completionMode": {},
            "f:completions": {},
            "f:parallelism": {},
            "f:suspend": {},
            "f:template": {
              "f:spec": {
                "f:containers": {
                  "k:{\"name\":\"cuda-vector-add\"}": {
                    ".": {},
                    "f:image": {},
                    "f:imagePullPolicy": {},
                    "f:name": {},
                    "f:resources": {
                      ".": {},
                      "f:limits": {
                        ".": {},
                        "f:nvidia.com/gpu": {}
                      }
                    },
                    "f:terminationMessagePath": {},
                    "f:terminationMessagePolicy": {}
                  }
                },
                "f:dnsPolicy": {},
                "f:restartPolicy": {},
                "f:schedulerName": {},
                "f:securityContext": {},
                "f:terminationGracePeriodSeconds": {}
              }
            }
          }
        }
      },
      {
        "manager": "kube-controller-manager",
        "operation": "Update",
        "apiVersion": "batch/v1",
        "time": "2022-09-24T04:41:58Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:status": {
            "f:active": {},
            "f:ready": {},
            "f:startTime": {}
          }
        },
        "subresource": "status"
      }
    ]
  },
  "spec": {
    "parallelism": 1,
    "completions": 1,
    "backoffLimit": 6,
    "selector": {
      "matchLabels": {
        "controller-uid": "da6e733e-ab6d-48c8-81c3-7c051038053d"
      }
    },
    "template": {
      "metadata": {
        "creationTimestamp": null,
        "labels": {
          "controller-uid": "da6e733e-ab6d-48c8-81c3-7c051038053d",
          "job-name": "cuda-vector-add"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "cuda-vector-add",
            "image": "nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.1-ubuntu18.04",
            "resources": {
              "limits": {
                "nvidia.com/gpu": "1"
              }
            },
            "terminationMessagePath": "/dev/termination-log",
            "terminationMessagePolicy": "File",
            "imagePullPolicy": "IfNotPresent"
          }
        ],
        "restartPolicy": "OnFailure",
        "terminationGracePeriodSeconds": 30,
        "dnsPolicy": "ClusterFirst",
        "securityContext": {},
        "schedulerName": "default-scheduler"
      }
    },
    "completionMode": "NonIndexed",
    "suspend": false
  },
  "status": {
    "startTime": "2022-09-24T04:41:58Z",
    "active": 1,
    "ready": 0
  }
}
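The dump shows a one-shot Job whose single container requests one nvidia.com/gpu, with restartPolicy OnFailure and the default backoffLimit of 6; its status at the deadline was active: 1, ready: 0, i.e. the pod had been created but never became ready (the Job's events and the failed assertion follow below). A hypothetical reconstruction, using client-go, of how a harness could create an equivalent Job; the function name and shape are illustrative, not the suite's actual code:

package shared

import (
    "context"

    batchv1 "k8s.io/api/batch/v1"
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createCUDAJob mirrors the Job dumped above: a single pod running the
// NVIDIA vectorAdd sample with a hard limit of one GPU.
func createCUDAJob(ctx context.Context, cs kubernetes.Interface) error {
    job := &batchv1.Job{
        ObjectMeta: metav1.ObjectMeta{Name: "cuda-vector-add", Namespace: "default"},
        Spec: batchv1.JobSpec{
            Template: corev1.PodTemplateSpec{
                Spec: corev1.PodSpec{
                    // OnFailure retries the pod in place rather than
                    // immediately marking the Job failed
                    RestartPolicy: corev1.RestartPolicyOnFailure,
                    Containers: []corev1.Container{{
                        Name:  "cuda-vector-add",
                        Image: "nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.1-ubuntu18.04",
                        Resources: corev1.ResourceRequirements{
                            // the scheduler keeps the pod Pending until some
                            // node advertises allocatable nvidia.com/gpu
                            Limits: corev1.ResourceList{
                                "nvidia.com/gpu": resource.MustParse("1"),
                            },
                        },
                    }},
                },
            },
        },
    }
    _, err := cs.BatchV1().Jobs("default").Create(ctx, job, metav1.CreateOptions{})
    return err
}

A single SuccessfulCreate event with no subsequent scheduling or image-pull events, as seen below, suggests the pod never left Pending, consistent with the GPU worker node not becoming ready (or the NVIDIA device plugin not yet advertising nvidia.com/gpu) within the ten-minute window.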
LAST SEEN                      TYPE    REASON            OBJECT               MESSAGE
2022-09-24 04:41:58 +0000 UTC  Normal  SuccessfulCreate  job/cuda-vector-add  Created pod: cuda-vector-add-jdtzr

Expected
    <bool>: false
to be true
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/gpu.go:134
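The combination of "Timed out after 600.069s." and "Expected <bool>: false to be true" is the typical signature of a Gomega Eventually assertion that polled a boolean until its ten-minute deadline; the Job dump and event list above are the annotation attached to that failed assertion. A minimal sketch of the pattern, assuming a hypothetical jobSucceeded helper (the real assertion lives at test/e2e/shared/gpu.go:134):

package shared

import (
    "context"
    "time"

    "github.com/onsi/gomega"
    batchv1 "k8s.io/api/batch/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// jobSucceeded is a hypothetical helper: true once the Job reports a
// JobComplete condition, i.e. the CUDA sample pod ran to completion.
func jobSucceeded(ctx context.Context, cs kubernetes.Interface, ns, name string) bool {
    job, err := cs.BatchV1().Jobs(ns).Get(ctx, name, metav1.GetOptions{})
    if err != nil {
        return false
    }
    for _, c := range job.Status.Conditions {
        if c.Type == batchv1.JobComplete && c.Status == corev1.ConditionTrue {
            return true
        }
    }
    return false
}

// waitForCUDAJob polls every 15s for up to 10 minutes; on timeout Gomega
// reports the last polled value, matching the
// "Expected <bool>: false to be true" output seen above.
func waitForCUDAJob(ctx context.Context, cs kubernetes.Interface) {
    gomega.Eventually(func() bool {
        return jobSucceeded(ctx, cs, "default", "cuda-vector-add")
    }, 10*time.Minute, 15*time.Second).Should(gomega.BeTrue(), "Job default/cuda-vector-add failed")
}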
Stdout/stderr from junit.e2e_suite.7.xml



Passed tests: 25
Skipped tests: 4

Error lines from build-log.txt

... skipping 2147 lines ...
[7]   GPU-enabled cluster test
[7]   /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:115
[7]     should create cluster with single worker [It]
[7]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:116
[7] 
[7]     Timed out after 600.069s.
[7]     Job default/cuda-vector-add failed
[7]     Job:
[7]     {
[7]       "metadata": {
[7]         "name": "cuda-vector-add",
[7]         "namespace": "default",
[7]         "uid": "da6e733e-ab6d-48c8-81c3-7c051038053d",
... skipping 569 lines ...
[5]     /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.2/e2e/cluster_upgrade.go:118
[5] ------------------------------
[5] 
[5] JUnit report was created: /logs/artifacts/junit.e2e_suite.5.xml
[5] 
[5] Ran 2 of 2 Specs in 4668.907 seconds
[5] SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 0 Skipped
[5] PASS
[1] INFO: Waiting for control plane to be ready
[1] INFO: Waiting for control plane functional-test-spot-instances-qyjh06/functional-test-spot-instances-cfvfyt-control-plane to be ready (implies underlying nodes to be ready as well)
[1] STEP: Waiting for the control plane to be ready
[1] STEP: Checking all the control plane machines are in the expected failure domains
[1] INFO: Waiting for the machine deployments to be provisioned
... skipping 37 lines ...
[4] STEP: Deleting namespace used for hosting the "" test spec
[4] INFO: Deleting namespace functional-efs-support-75kk91
[4] 
[4] JUnit report was created: /logs/artifacts/junit.e2e_suite.4.xml
[4] 
[4] Ran 4 of 4 Specs in 4803.375 seconds
[4] SUCCESS! -- 4 Passed | 0 Failed | 0 Pending | 0 Skipped
[4] PASS
[3] STEP: Checking all the machines controlled by functional-test-ignition-l9f0as-md-0 are in the "<None>" failure domain
[3] INFO: Waiting for the machine pools to be provisioned
[3] 
[3] • [SLOW TEST:361.571 seconds]
[3] [unmanaged] [functional]
... skipping 63 lines ...
[3] STEP: Deleting namespace used for hosting the "" test spec
[3] INFO: Deleting namespace functional-test-ignition-png13f
[3] 
[3] JUnit report was created: /logs/artifacts/junit.e2e_suite.3.xml
[3] 
[3] Ran 4 of 6 Specs in 5077.849 seconds
[3] SUCCESS! -- 4 Passed | 0 Failed | 2 Pending | 0 Skipped
[3] PASS
[1] STEP: Deleting namespace used for hosting the "" test spec
[1] INFO: Deleting namespace functional-test-spot-instances-qyjh06
[1] STEP: Node 1 released resources: {ec2-normal:4, vpc:1, eip:3, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0}
[1] 
[1] • [SLOW TEST:778.655 seconds]
... skipping 17 lines ...
[2]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_clusterclass_test.go:54
[2] ------------------------------
[2] 
[2] JUnit report was created: /logs/artifacts/junit.e2e_suite.2.xml
[2] 
[2] Ran 5 of 5 Specs in 5330.191 seconds
[2] SUCCESS! -- 5 Passed | 0 Failed | 0 Pending | 0 Skipped
[2] PASS
[7] STEP: Deleting namespace used for hosting the "" test spec
[7] INFO: Deleting namespace functional-gpu-cluster-i1sl39
[7] 
[7] JUnit report was created: /logs/artifacts/junit.e2e_suite.7.xml
[7] 
[7] 
[7] Summarizing 1 Failure:
[7] 
[7] [Fail] [unmanaged] [functional] GPU-enabled cluster test [It] should create cluster with single worker 
[7] /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/gpu.go:134
[7] 
[7] Ran 3 of 3 Specs in 5382.900 seconds
[7] FAIL! -- 2 Passed | 1 Failed | 0 Pending | 0 Skipped
[7] --- FAIL: TestE2E (5382.94s)
[7] FAIL
[6] STEP: Deleting namespace used for hosting the "" test spec
[6] INFO: Deleting namespace functional-test-ssm-parameter-store-clusterclass-6i8ptx
[6] STEP: Node 6 released resources: {ec2-normal:0, vpc:1, eip:3, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0}
[6] 
[6] • [SLOW TEST:806.962 seconds]
[6] [unmanaged] [functional] [ClusterClass]
... skipping 4 lines ...
[6]     /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_clusterclass_test.go:96
[6] ------------------------------
[6] 
[6] JUnit report was created: /logs/artifacts/junit.e2e_suite.6.xml
[6] 
[6] Ran 3 of 3 Specs in 5463.383 seconds
[6] SUCCESS! -- 3 Passed | 0 Failed | 0 Pending | 0 Skipped
[6] PASS
[1] folder created for eks clusters: /logs/artifacts/clusters/bootstrap/aws-resources
[1] STEP: Tearing down the management cluster
[1] STEP: Deleting cluster-api-provider-aws-sigs-k8s-io CloudFormation stack
[1] 
[1] JUnit report was created: /logs/artifacts/junit.e2e_suite.1.xml
[1] 
[1] Ran 5 of 7 Specs in 5893.531 seconds
[1] SUCCESS! -- 5 Passed | 0 Failed | 2 Pending | 0 Skipped
[1] PASS

Ginkgo ran 1 suite in 1h39m52.598212948s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 3 lines ...
To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc

real	99m52.608s
user	22m9.306s
sys	6m6.814s
make: *** [Makefile:401: test-e2e] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...