PR sbueringer: 🌱 Fix upgrade test (CoreDNS verification)
Result FAILURE
Tests 1 failed / 0 succeeded
Started 2021-04-13 11:15
Elapsed 59m29s
Revision 07bb22c23ebc89951d5fa915df65e8e81dbc5571
Refs 4470

Test Failures


capi-e2e When upgrading a workload cluster and testing K8S conformance [Conformance] [K8s-Upgrade] Should create and upgrade a workload cluster and run kubetest 39m31s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$'
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:82
Failed to run Kubernetes conformance
Unexpected error:
    <*errors.withStack | 0xc00061c0c0>: {
        error: <*errors.withMessage | 0xc0009f2060>{
            cause: <*exec.ExitError | 0xc0009f2000>{
                ProcessState: {
                    pid: 141585,
                    status: 256,
                    rusage: {
                        Utime: {Sec: 0, Usec: 116325},
                        Stime: {Sec: 0, Usec: 186217},
                        Maxrss: 56176,
                        Ixrss: 0,
                        Idrss: 0,
                        Isrss: 0,
                        Minflt: 7508,
                        Majflt: 21,
                        Nswap: 0,
                        Inblock: 42248,
                        Oublock: 0,
                        Msgsnd: 0,
                        Msgrcv: 0,
                        Nsignals: 0,
                        Nvcsw: 3772,
                        Nivcsw: 30,
                    },
                },
                Stderr: nil,
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x197e6ce, 0x19d1ab8, 0x77ae63, 0x77aa7c, 0x77a047, 0x780b6f, 0x780212, 0x7a8c71, 0x7a8787, 0x7a7f77, 0x7aa686, 0x7b7c98, 0x7b79ed, 0x19cdd56, 0x5267ef, 0x4743a1],
    }
    Unable to run conformance tests: exit status 1
occurred
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:153
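
To reproduce the failing spec outside of prow, a minimal sketch is below. Assumptions: the job runs the cluster-api test/e2e Makefile "run" target (per the "make: *** [Makefile:92: run]" line further down) and that Makefile exposes a GINKGO_FOCUS variable passed to ginkgo's focus flag; adjust to the actual Makefile if it differs.

  # Hedged sketch, not part of the job output.
  cd "$GOPATH/src/sigs.k8s.io/cluster-api"
  # GINKGO_FOCUS is an assumed Makefile variable; check test/e2e/Makefile before relying on it.
  make -C test/e2e run \
    GINKGO_FOCUS="When upgrading a workload cluster and testing K8S conformance"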
				
Stdout/stderr: junit.e2e_suite.2.xml




Error lines from build-log.txt

... skipping 593 lines ...
+ retVal=0
++ docker images -q kindest/node:v1.22.0-alpha.0.338_1fa101d1a4219e
+ [[ '' == '' ]]
+ echo '+ Pulling kindest/node:v1.22.0-alpha.0.338_1fa101d1a4219e'
+ Pulling kindest/node:v1.22.0-alpha.0.338_1fa101d1a4219e
+ docker pull kindest/node:v1.22.0-alpha.0.338_1fa101d1a4219e
Error response from daemon: manifest for kindest/node:v1.22.0-alpha.0.338_1fa101d1a4219e not found: manifest unknown: manifest unknown
+ retVal=1
+ [[ 1 != 0 ]]
+ echo '+ image for Kuberentes v1.22.0-alpha.0.338+1fa101d1a4219e is not available in docker hub, trying local build'
+ image for Kuberentes v1.22.0-alpha.0.338+1fa101d1a4219e is not available in docker hub, trying local build
+ kind::buildNodeImage v1.22.0-alpha.0.338+1fa101d1a4219e
+ local version=v1.22.0-alpha.0.338+1fa101d1a4219e
... skipping 157 lines ...
+ retVal=0
++ docker images -q kindest/node:v1.21.0
+ [[ '' == '' ]]
+ echo '+ Pulling kindest/node:v1.21.0'
+ Pulling kindest/node:v1.21.0
+ docker pull kindest/node:v1.21.0
Error response from daemon: manifest for kindest/node:v1.21.0 not found: manifest unknown: manifest unknown
+ retVal=1
+ [[ 1 != 0 ]]
+ echo '+ image for Kuberentes v1.21.0 is not available in docker hub, trying local build'
+ image for Kuberentes v1.21.0 is not available in docker hub, trying local build
+ kind::buildNodeImage v1.21.0
+ local version=v1.21.0
... skipping 462 lines ...
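
The two traces above show the job's node-image handling: check the local docker cache, try to pull kindest/node from Docker Hub, and fall back to building the image locally with kind when the manifest is unknown. A minimal sketch of that pattern, with an illustrative helper name (build_node_image_locally) standing in for the script's kind::buildNodeImage:

  # Hedged sketch of the pull-or-build fallback seen in the trace above.
  ensure_node_image() {
    local version="${1}"                        # e.g. v1.21.0 or v1.22.0-alpha.0.338+1fa101d1a4219e
    local image="kindest/node:${version//+/_}"  # image tags use '_' where the version string uses '+'
    # already in the local cache?
    [[ -n "$(docker images -q "${image}")" ]] && return 0
    # try Docker Hub first
    docker pull "${image}" && return 0
    # manifest unknown (e.g. CI/alpha versions): build the node image locally
    echo "+ image for Kubernetes ${version} is not available in docker hub, trying local build"
    build_node_image_locally "${version}"
  }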
Apr 13 11:43:39.950: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 13 11:43:39.954: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 13 11:43:39.969: INFO: Condition Ready of node k8s-upgrade-and-conformance-wc7vuo-worker-swuxae is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready  NoSchedule <nil>} {node.kubernetes.io/not-ready  NoExecute 2021-04-13 11:43:33 +0000 UTC}]. Failure
Apr 13 11:43:39.969: INFO: Condition Ready of node k8s-upgrade-and-conformance-wc7vuo-worker-88nfls is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Apr 13 11:43:39.969: INFO: Condition Ready of node k8s-upgrade-and-conformance-wc7vuo-worker-tz3vjt is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Apr 13 11:43:39.969: INFO: Condition Ready of node k8s-upgrade-and-conformance-wc7vuo-worker-tuqgmu is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Apr 13 11:43:39.969: INFO: Condition Ready of node k8s-upgrade-and-conformance-wc7vuo-worker-9sjn00 is false instead of true. Reason: KubeletNotReady, message: [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, CSINode is not yet initialized]

Apr 13 11:43:39.969: INFO: Unschedulable nodes= 5, maximum value for starting tests= 0
Apr 13 11:43:39.970: INFO: 	-> Node k8s-upgrade-and-conformance-wc7vuo-worker-swuxae [[[ Ready=false, Network(available)=false, Taints=[{node.kubernetes.io/not-ready  NoSchedule <nil>} {node.kubernetes.io/not-ready  NoExecute 2021-04-13 11:43:33 +0000 UTC}], NonblockingTaints=node-role.kubernetes.io/master ]]]
Apr 13 11:43:39.970: INFO: 	-> Node k8s-upgrade-and-conformance-wc7vuo-worker-88nfls [[[ Ready=false, Network(available)=false, Taints=[{node.kubernetes.io/unreachable  NoSchedule 2021-04-13 11:43:38 +0000 UTC}], NonblockingTaints=node-role.kubernetes.io/master ]]]
Apr 13 11:43:39.970: INFO: 	-> Node k8s-upgrade-and-conformance-wc7vuo-worker-tz3vjt [[[ Ready=false, Network(available)=false, Taints=[{node.kubernetes.io/unreachable  NoSchedule 2021-04-13 11:43:38 +0000 UTC}], NonblockingTaints=node-role.kubernetes.io/master ]]]
Apr 13 11:43:39.970: INFO: 	-> Node k8s-upgrade-and-conformance-wc7vuo-worker-tuqgmu [[[ Ready=false, Network(available)=false, Taints=[{node.kubernetes.io/unreachable  NoSchedule 2021-04-13 11:43:38 +0000 UTC}], NonblockingTaints=node-role.kubernetes.io/master ]]]
Apr 13 11:43:39.970: INFO: 	-> Node k8s-upgrade-and-conformance-wc7vuo-worker-9sjn00 [[[ Ready=false, Network(available)=false, Taints=[{node.kubernetes.io/not-ready  NoSchedule <nil>}], NonblockingTaints=node-role.kubernetes.io/master ]]]
... skipping 483 lines ...
Apr 13 12:13:39.985: INFO: Condition Ready of node k8s-upgrade-and-conformance-wc7vuo-worker-tuqgmu is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2021-04-13 11:43:38 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2021-04-13 11:44:09 +0000 UTC}]. Failure
Apr 13 12:13:39.985: INFO: Unschedulable nodes= 3, maximum value for starting tests= 0
Apr 13 12:13:39.985: INFO: 	-> Node k8s-upgrade-and-conformance-wc7vuo-worker-88nfls [[[ Ready=false, Network(available)=false, Taints=[{node.kubernetes.io/unreachable  NoSchedule 2021-04-13 11:43:38 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2021-04-13 11:44:19 +0000 UTC}], NonblockingTaints=node-role.kubernetes.io/master ]]]
Apr 13 12:13:39.985: INFO: 	-> Node k8s-upgrade-and-conformance-wc7vuo-worker-tz3vjt [[[ Ready=false, Network(available)=false, Taints=[{node.kubernetes.io/unreachable  NoSchedule 2021-04-13 11:43:38 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2021-04-13 11:43:59 +0000 UTC}], NonblockingTaints=node-role.kubernetes.io/master ]]]
Apr 13 12:13:39.985: INFO: 	-> Node k8s-upgrade-and-conformance-wc7vuo-worker-tuqgmu [[[ Ready=false, Network(available)=false, Taints=[{node.kubernetes.io/unreachable  NoSchedule 2021-04-13 11:43:38 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2021-04-13 11:44:09 +0000 UTC}], NonblockingTaints=node-role.kubernetes.io/master ]]]
Apr 13 12:13:39.985: INFO: ==== node wait: 7 out of 10 nodes are ready, max notReady allowed 0.  Need 3 more before starting.
Apr 13 12:13:39.985: FAIL: Unexpected error:

    <*errors.errorString | 0xc000240240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
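
The conformance suite never got past BeforeSuite: the upgraded cluster's workers stayed NotReady for the whole 30-minute wait (kubelet stopped posting node status, "cni plugin not initialized"). Not part of the job output, but a sketch of how one might inspect the stuck nodes against a live workload cluster, assuming the kubeconfig path logged above (/tmp/kubeconfig):

  # Hedged sketch: inspect the NotReady workers.
  export KUBECONFIG=/tmp/kubeconfig
  kubectl get nodes -o wide
  # conditions and taints for one of the stuck workers
  kubectl describe node k8s-upgrade-and-conformance-wc7vuo-worker-9sjn00
  # CNI / kube-proxy / CoreDNS pods that have to come up before a node turns Ready
  kubectl -n kube-system get pods -o wide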

... skipping 17 lines ...


Failure [1800.038 seconds]
[BeforeSuite] BeforeSuite 
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:76

  Apr 13 12:13:39.985: Unexpected error:

      <*errors.errorString | 0xc000240240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:264
------------------------------
Failure [1800.141 seconds]
[BeforeSuite] BeforeSuite 
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:76

  BeforeSuite on Node 1 failed


  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:76
------------------------------
Failure [1800.242 seconds]
[BeforeSuite] BeforeSuite 
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:76

  BeforeSuite on Node 1 failed


  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:76
------------------------------
Apr 13 12:13:40.033: INFO: Running AfterSuite actions on all nodes


... skipping 3 lines ...
Apr 13 12:13:39.994: INFO: Running AfterSuite actions on all nodes
Apr 13 12:13:40.096: INFO: Running AfterSuite actions on node 1
Apr 13 12:13:40.097: INFO: Skipping dumping logs from cluster


Ran 17313 of 0 Specs in 1800.310 seconds
FAIL! -- 0 Passed | 17313 Failed | 0 Pending | 0 Skipped



Ginkgo ran 1 suite in 30m2.851516911s
Test Suite Failed

STEP: Dumping logs from the "k8s-upgrade-and-conformance-wc7vuo" workload cluster
STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-rekfp7" namespace
STEP: Deleting cluster k8s-upgrade-and-conformance-rekfp7/k8s-upgrade-and-conformance-wc7vuo
STEP: Deleting cluster k8s-upgrade-and-conformance-wc7vuo
INFO: Waiting for the Cluster k8s-upgrade-and-conformance-rekfp7/k8s-upgrade-and-conformance-wc7vuo to be deleted
STEP: Waiting for cluster k8s-upgrade-and-conformance-wc7vuo to be deleted
... skipping 4 lines ...
• Failure [2371.595 seconds]
When upgrading a workload cluster and testing K8S conformance [Conformance] [K8s-Upgrade]
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade_test.go:27
  Should create and upgrade a workload cluster and run kubetest [It]
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:82

  Failed to run Kubernetes conformance
  Unexpected error:
      <*errors.withStack | 0xc00061c0c0>: {
          error: <*errors.withMessage | 0xc0009f2060>{
              cause: <*exec.ExitError | 0xc0009f2000>{
                  ProcessState: {
                      pid: 141585,
                      status: 256,
                      rusage: {
                          Utime: {Sec: 0, Usec: 116325},
... skipping 60 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] When upgrading a workload cluster and testing K8S conformance [Conformance] [K8s-Upgrade] [It] Should create and upgrade a workload cluster and run kubetest 
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:153

Ran 1 of 13 Specs in 2517.456 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 12 Skipped


Ginkgo ran 1 suite in 43m3.668936582s
Test Suite Failed
make: *** [Makefile:92: run] Error 1
make: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e'
+ cleanup
++ pgrep -f 'docker events'
+ kill 89998
++ pgrep -f 'ctr -n moby events'
+ kill 89999
... skipping 24 lines ...