PR oscr: ⚠️ Use Kubernetes 1.25 in Quick Start docs and CAPD.
Result FAILURE
Tests 0 failed / 7 succeeded
Started 2022-09-02 11:26
Elapsed 1h6m
Revision 80e49ff8f61df4e7254be88ee5caf51d61acba1f
Refs 7156

No Test Failures!


7 Passed Tests

20 Skipped Tests

Error lines from build-log.txt

... skipping 908 lines ...
Status: Downloaded newer image for quay.io/jetstack/cert-manager-controller:v1.9.1
quay.io/jetstack/cert-manager-controller:v1.9.1
+ export GINKGO_NODES=3
+ GINKGO_NODES=3
+ export GINKGO_NOCOLOR=true
+ GINKGO_NOCOLOR=true
+ export GINKGO_ARGS=--fail-fast
+ GINKGO_ARGS=--fail-fast
+ export E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
+ E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
+ export ARTIFACTS=/logs/artifacts
+ ARTIFACTS=/logs/artifacts
+ export SKIP_RESOURCE_CLEANUP=false
+ SKIP_RESOURCE_CLEANUP=false
... skipping 79 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-kcp-scale-in --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-kcp-scale-in.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ipv6 --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ipv6.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-topology --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-topology.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ignition --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ignition.yaml
mkdir -p /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/test-extension
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/extension/config/default > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/test-extension/deployment.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/ginkgo-v2.1.4 -v --trace --tags=e2e --focus="\[K8s-Upgrade\]"  --nodes=3 --no-color=true --output-dir="/logs/artifacts" --junit-report="junit.e2e_suite.1.xml" --fail-fast . -- \
    -e2e.artifacts-folder="/logs/artifacts" \
    -e2e.config="/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml" \
    -e2e.skip-resource-cleanup=false -e2e.use-existing-cluster=false
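
    For reference, the invocation above can be approximated outside CI by setting the same variables this job exports and running the Ginkgo binary against the e2e package. The sketch below is assembled only from the flags visible in this log; the working directory for the "." package argument, the /tmp/artifacts path, and the assumption that the binaries under hack/tools/bin are already built are not confirmed by the log and may differ from the actual CI entry point.

    # Minimal local approximation of the logged command (assumptions noted inline).
    cd /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e   # assumed working directory for the "." package argument
    export ARTIFACTS=/tmp/artifacts                          # CI writes to /logs/artifacts
    export E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
    export SKIP_RESOURCE_CLEANUP=false
    ../../hack/tools/bin/ginkgo-v2.1.4 -v --trace --tags=e2e \
        --focus="\[K8s-Upgrade\]" --nodes=3 --no-color=true \
        --output-dir="${ARTIFACTS}" --junit-report="junit.e2e_suite.1.xml" --fail-fast . -- \
        -e2e.artifacts-folder="${ARTIFACTS}" \
        -e2e.config="${E2E_CONF_FILE}" \
        -e2e.skip-resource-cleanup="${SKIP_RESOURCE_CLEANUP}" -e2e.use-existing-cluster=false
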
go: downloading k8s.io/apimachinery v0.24.2
go: downloading github.com/blang/semver v3.5.1+incompatible
go: downloading github.com/onsi/gomega v1.20.0
... skipping 229 lines ...
    kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-rxa2hz-mp-0-config created
    kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-rxa2hz-mp-0-config-cgroupfs created
    cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-rxa2hz created
    machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-rxa2hz-mp-0 created
    dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-rxa2hz-dmp-0 created

    Failed to get logs for Machine k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5, Cluster k8s-upgrade-and-conformance-7hitst/k8s-upgrade-and-conformance-rxa2hz: exit status 2
    Failed to get logs for Machine k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw, Cluster k8s-upgrade-and-conformance-7hitst/k8s-upgrade-and-conformance-rxa2hz: exit status 2
    Failed to get logs for Machine k8s-upgrade-and-conformance-rxa2hz-psptv-vbzpk, Cluster k8s-upgrade-and-conformance-7hitst/k8s-upgrade-and-conformance-rxa2hz: exit status 2
    Failed to get logs for MachinePool k8s-upgrade-and-conformance-rxa2hz-mp-0, Cluster k8s-upgrade-and-conformance-7hitst/k8s-upgrade-and-conformance-rxa2hz: exit status 2
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec 09/02/22 11:35:33.908
    INFO: Creating namespace k8s-upgrade-and-conformance-7hitst
    INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-7hitst"
... skipping 41 lines ...
    
    Running in parallel across 4 nodes
    
    Sep  2 11:42:19.731: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  2 11:42:19.735: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
    Sep  2 11:42:19.756: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
    Sep  2 11:42:19.806: INFO: The status of Pod coredns-78fcd69978-92bmt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:19.806: INFO: The status of Pod coredns-78fcd69978-k7qqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:19.806: INFO: The status of Pod kindnet-9kw4h is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:19.806: INFO: The status of Pod kindnet-lh75c is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:19.806: INFO: The status of Pod kube-proxy-8xv5j is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:19.806: INFO: The status of Pod kube-proxy-cqrjz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:19.806: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
    Sep  2 11:42:19.806: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep  2 11:42:19.806: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  2 11:42:19.806: INFO: coredns-78fcd69978-92bmt  k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  }]
    Sep  2 11:42:19.806: INFO: coredns-78fcd69978-k7qqx  k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  }]
    Sep  2 11:42:19.806: INFO: kindnet-9kw4h             k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:20 +0000 UTC  }]
    Sep  2 11:42:19.806: INFO: kindnet-lh75c             k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:37 +0000 UTC  }]
    Sep  2 11:42:19.806: INFO: kube-proxy-8xv5j          k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  }]
    Sep  2 11:42:19.806: INFO: kube-proxy-cqrjz          k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  }]
    Sep  2 11:42:19.806: INFO: 
    Sep  2 11:42:21.832: INFO: The status of Pod coredns-78fcd69978-92bmt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:21.832: INFO: The status of Pod coredns-78fcd69978-k7qqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:21.832: INFO: The status of Pod kindnet-9kw4h is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:21.832: INFO: The status of Pod kindnet-lh75c is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:21.832: INFO: The status of Pod kube-proxy-8xv5j is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:21.832: INFO: The status of Pod kube-proxy-cqrjz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:21.832: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
    Sep  2 11:42:21.832: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep  2 11:42:21.832: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  2 11:42:21.832: INFO: coredns-78fcd69978-92bmt  k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  }]
    Sep  2 11:42:21.832: INFO: coredns-78fcd69978-k7qqx  k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  }]
    Sep  2 11:42:21.832: INFO: kindnet-9kw4h             k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:20 +0000 UTC  }]
    Sep  2 11:42:21.832: INFO: kindnet-lh75c             k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:37 +0000 UTC  }]
    Sep  2 11:42:21.832: INFO: kube-proxy-8xv5j          k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  }]
    Sep  2 11:42:21.832: INFO: kube-proxy-cqrjz          k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  }]
    Sep  2 11:42:21.832: INFO: 
    Sep  2 11:42:23.831: INFO: The status of Pod coredns-78fcd69978-92bmt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:23.831: INFO: The status of Pod coredns-78fcd69978-k7qqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:23.831: INFO: The status of Pod kindnet-9kw4h is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:23.831: INFO: The status of Pod kindnet-lh75c is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:23.831: INFO: The status of Pod kube-proxy-8xv5j is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:23.831: INFO: The status of Pod kube-proxy-cqrjz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:23.831: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
    Sep  2 11:42:23.831: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep  2 11:42:23.831: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  2 11:42:23.831: INFO: coredns-78fcd69978-92bmt  k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  }]
    Sep  2 11:42:23.831: INFO: coredns-78fcd69978-k7qqx  k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  }]
    Sep  2 11:42:23.831: INFO: kindnet-9kw4h             k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:20 +0000 UTC  }]
    Sep  2 11:42:23.831: INFO: kindnet-lh75c             k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:37 +0000 UTC  }]
    Sep  2 11:42:23.831: INFO: kube-proxy-8xv5j          k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  }]
    Sep  2 11:42:23.831: INFO: kube-proxy-cqrjz          k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  }]
    Sep  2 11:42:23.831: INFO: 
    Sep  2 11:42:25.837: INFO: The status of Pod coredns-78fcd69978-92bmt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:25.837: INFO: The status of Pod coredns-78fcd69978-k7qqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:25.837: INFO: The status of Pod kindnet-9kw4h is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:25.837: INFO: The status of Pod kindnet-lh75c is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:25.837: INFO: The status of Pod kube-proxy-8xv5j is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:25.837: INFO: The status of Pod kube-proxy-cqrjz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:25.837: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (6 seconds elapsed)
    Sep  2 11:42:25.837: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep  2 11:42:25.837: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  2 11:42:25.837: INFO: coredns-78fcd69978-92bmt  k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  }]
    Sep  2 11:42:25.837: INFO: coredns-78fcd69978-k7qqx  k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  }]
    Sep  2 11:42:25.837: INFO: kindnet-9kw4h             k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:20 +0000 UTC  }]
    Sep  2 11:42:25.837: INFO: kindnet-lh75c             k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:37 +0000 UTC  }]
    Sep  2 11:42:25.837: INFO: kube-proxy-8xv5j          k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  }]
    Sep  2 11:42:25.837: INFO: kube-proxy-cqrjz          k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  }]
    Sep  2 11:42:25.837: INFO: 
    Sep  2 11:42:27.831: INFO: The status of Pod coredns-78fcd69978-92bmt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:27.831: INFO: The status of Pod coredns-78fcd69978-k7qqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:27.831: INFO: The status of Pod kindnet-9kw4h is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:27.831: INFO: The status of Pod kindnet-lh75c is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:27.831: INFO: The status of Pod kube-proxy-8xv5j is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:27.831: INFO: The status of Pod kube-proxy-cqrjz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:27.831: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (8 seconds elapsed)
    Sep  2 11:42:27.831: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep  2 11:42:27.831: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  2 11:42:27.831: INFO: coredns-78fcd69978-92bmt  k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  }]
    Sep  2 11:42:27.831: INFO: coredns-78fcd69978-k7qqx  k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  }]
    Sep  2 11:42:27.831: INFO: kindnet-9kw4h             k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:20 +0000 UTC  }]
    Sep  2 11:42:27.831: INFO: kindnet-lh75c             k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:37 +0000 UTC  }]
    Sep  2 11:42:27.831: INFO: kube-proxy-8xv5j          k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  }]
    Sep  2 11:42:27.831: INFO: kube-proxy-cqrjz          k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  }]
    Sep  2 11:42:27.831: INFO: 
    Sep  2 11:42:29.828: INFO: The status of Pod coredns-78fcd69978-92bmt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:29.828: INFO: The status of Pod coredns-78fcd69978-k7qqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:29.828: INFO: The status of Pod kindnet-9kw4h is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:29.828: INFO: The status of Pod kindnet-lh75c is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:29.828: INFO: The status of Pod kube-proxy-8xv5j is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:29.828: INFO: The status of Pod kube-proxy-cqrjz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:29.828: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (10 seconds elapsed)
    Sep  2 11:42:29.828: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep  2 11:42:29.828: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  2 11:42:29.828: INFO: coredns-78fcd69978-92bmt  k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  }]
    Sep  2 11:42:29.828: INFO: coredns-78fcd69978-k7qqx  k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  }]
    Sep  2 11:42:29.828: INFO: kindnet-9kw4h             k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:20 +0000 UTC  }]
    Sep  2 11:42:29.828: INFO: kindnet-lh75c             k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:37 +0000 UTC  }]
    Sep  2 11:42:29.828: INFO: kube-proxy-8xv5j          k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  }]
    Sep  2 11:42:29.828: INFO: kube-proxy-cqrjz          k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  }]
    Sep  2 11:42:29.828: INFO: 
    Sep  2 11:42:31.828: INFO: The status of Pod coredns-78fcd69978-92bmt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:31.828: INFO: The status of Pod coredns-78fcd69978-k7qqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:31.828: INFO: The status of Pod kindnet-9kw4h is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:31.828: INFO: The status of Pod kindnet-lh75c is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:31.828: INFO: The status of Pod kube-proxy-8xv5j is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:31.828: INFO: The status of Pod kube-proxy-cqrjz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:31.828: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (12 seconds elapsed)
    Sep  2 11:42:31.828: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep  2 11:42:31.829: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  2 11:42:31.829: INFO: coredns-78fcd69978-92bmt  k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  }]
    Sep  2 11:42:31.829: INFO: coredns-78fcd69978-k7qqx  k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  }]
    Sep  2 11:42:31.829: INFO: kindnet-9kw4h             k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:20 +0000 UTC  }]
    Sep  2 11:42:31.829: INFO: kindnet-lh75c             k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:37 +0000 UTC  }]
    Sep  2 11:42:31.829: INFO: kube-proxy-8xv5j          k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  }]
    Sep  2 11:42:31.829: INFO: kube-proxy-cqrjz          k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  }]
    Sep  2 11:42:31.829: INFO: 
    Sep  2 11:42:33.831: INFO: The status of Pod coredns-78fcd69978-92bmt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:33.831: INFO: The status of Pod coredns-78fcd69978-k7qqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:33.832: INFO: The status of Pod kindnet-9kw4h is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:33.832: INFO: The status of Pod kindnet-lh75c is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:33.832: INFO: The status of Pod kube-proxy-8xv5j is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:33.832: INFO: The status of Pod kube-proxy-cqrjz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:33.832: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (14 seconds elapsed)
    Sep  2 11:42:33.832: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep  2 11:42:33.832: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  2 11:42:33.832: INFO: coredns-78fcd69978-92bmt  k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  }]
    Sep  2 11:42:33.832: INFO: coredns-78fcd69978-k7qqx  k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  }]
    Sep  2 11:42:33.832: INFO: kindnet-9kw4h             k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:20 +0000 UTC  }]
    Sep  2 11:42:33.832: INFO: kindnet-lh75c             k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:37 +0000 UTC  }]
    Sep  2 11:42:33.832: INFO: kube-proxy-8xv5j          k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  }]
    Sep  2 11:42:33.832: INFO: kube-proxy-cqrjz          k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  }]
    Sep  2 11:42:33.832: INFO: 
    Sep  2 11:42:35.836: INFO: The status of Pod coredns-78fcd69978-92bmt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:35.836: INFO: The status of Pod coredns-78fcd69978-k7qqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:35.836: INFO: The status of Pod kindnet-9kw4h is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:35.836: INFO: The status of Pod kindnet-lh75c is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:35.836: INFO: The status of Pod kube-proxy-8xv5j is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:35.836: INFO: The status of Pod kube-proxy-cqrjz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:35.836: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (16 seconds elapsed)
    Sep  2 11:42:35.836: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep  2 11:42:35.836: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  2 11:42:35.836: INFO: coredns-78fcd69978-92bmt  k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  }]
    Sep  2 11:42:35.836: INFO: coredns-78fcd69978-k7qqx  k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  }]
    Sep  2 11:42:35.836: INFO: kindnet-9kw4h             k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:20 +0000 UTC  }]
    Sep  2 11:42:35.836: INFO: kindnet-lh75c             k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:37 +0000 UTC  }]
    Sep  2 11:42:35.836: INFO: kube-proxy-8xv5j          k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  }]
    Sep  2 11:42:35.836: INFO: kube-proxy-cqrjz          k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  }]
    Sep  2 11:42:35.836: INFO: 
    Sep  2 11:42:37.833: INFO: The status of Pod coredns-78fcd69978-92bmt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:37.833: INFO: The status of Pod coredns-78fcd69978-k7qqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:37.833: INFO: The status of Pod kindnet-9kw4h is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:37.833: INFO: The status of Pod kindnet-lh75c is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:37.833: INFO: The status of Pod kube-proxy-8xv5j is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:37.833: INFO: The status of Pod kube-proxy-cqrjz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:37.833: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (18 seconds elapsed)
    Sep  2 11:42:37.833: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep  2 11:42:37.833: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  2 11:42:37.833: INFO: coredns-78fcd69978-92bmt  k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  }]
    Sep  2 11:42:37.833: INFO: coredns-78fcd69978-k7qqx  k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  }]
    Sep  2 11:42:37.833: INFO: kindnet-9kw4h             k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:20 +0000 UTC  }]
    Sep  2 11:42:37.833: INFO: kindnet-lh75c             k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:37 +0000 UTC  }]
    Sep  2 11:42:37.833: INFO: kube-proxy-8xv5j          k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  }]
    Sep  2 11:42:37.833: INFO: kube-proxy-cqrjz          k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  }]
    Sep  2 11:42:37.833: INFO: 
    Sep  2 11:42:39.830: INFO: The status of Pod coredns-78fcd69978-92bmt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:39.830: INFO: The status of Pod coredns-78fcd69978-k7qqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:39.830: INFO: The status of Pod kindnet-9kw4h is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:39.830: INFO: The status of Pod kindnet-lh75c is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:39.830: INFO: The status of Pod kube-proxy-8xv5j is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:39.830: INFO: The status of Pod kube-proxy-cqrjz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:39.830: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (20 seconds elapsed)
    Sep  2 11:42:39.830: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep  2 11:42:39.830: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  2 11:42:39.830: INFO: coredns-78fcd69978-92bmt  k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  }]
    Sep  2 11:42:39.831: INFO: coredns-78fcd69978-k7qqx  k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  }]
    Sep  2 11:42:39.831: INFO: kindnet-9kw4h             k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:20 +0000 UTC  }]
    Sep  2 11:42:39.831: INFO: kindnet-lh75c             k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:37 +0000 UTC  }]
    Sep  2 11:42:39.831: INFO: kube-proxy-8xv5j          k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  }]
    Sep  2 11:42:39.831: INFO: kube-proxy-cqrjz          k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  }]
    Sep  2 11:42:39.831: INFO: 
    Sep  2 11:42:41.832: INFO: The status of Pod coredns-78fcd69978-92bmt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:41.832: INFO: The status of Pod coredns-78fcd69978-k7qqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:41.832: INFO: The status of Pod kindnet-9kw4h is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:41.832: INFO: The status of Pod kindnet-lh75c is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:41.832: INFO: The status of Pod kube-proxy-8xv5j is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:41.832: INFO: The status of Pod kube-proxy-cqrjz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:41.832: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (22 seconds elapsed)
    Sep  2 11:42:41.832: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep  2 11:42:41.832: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  2 11:42:41.832: INFO: coredns-78fcd69978-92bmt  k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  }]
    Sep  2 11:42:41.833: INFO: coredns-78fcd69978-k7qqx  k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  }]
    Sep  2 11:42:41.833: INFO: kindnet-9kw4h             k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:20 +0000 UTC  }]
    Sep  2 11:42:41.833: INFO: kindnet-lh75c             k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:37 +0000 UTC  }]
    Sep  2 11:42:41.833: INFO: kube-proxy-8xv5j          k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  }]
    Sep  2 11:42:41.833: INFO: kube-proxy-cqrjz          k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  }]
    Sep  2 11:42:41.833: INFO: 
    Sep  2 11:42:43.830: INFO: The status of Pod coredns-78fcd69978-92bmt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:43.830: INFO: The status of Pod coredns-78fcd69978-k7qqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:43.830: INFO: The status of Pod kindnet-9kw4h is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:43.830: INFO: The status of Pod kindnet-lh75c is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:43.830: INFO: The status of Pod kube-proxy-8xv5j is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:43.830: INFO: The status of Pod kube-proxy-cqrjz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 11:42:43.830: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (24 seconds elapsed)
    Sep  2 11:42:43.830: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep  2 11:42:43.830: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  2 11:42:43.830: INFO: coredns-78fcd69978-92bmt  k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:33 +0000 UTC  }]
    Sep  2 11:42:43.830: INFO: coredns-78fcd69978-k7qqx  k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:39:51 +0000 UTC  }]
    Sep  2 11:42:43.830: INFO: kindnet-9kw4h             k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:20 +0000 UTC  }]
    Sep  2 11:42:43.830: INFO: kindnet-lh75c             k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:37:37 +0000 UTC  }]
    Sep  2 11:42:43.830: INFO: kube-proxy-8xv5j          k8s-upgrade-and-conformance-rxa2hz-worker-i35j7o  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:52 +0000 UTC  }]
    Sep  2 11:42:43.830: INFO: kube-proxy-cqrjz          k8s-upgrade-and-conformance-rxa2hz-worker-md70aw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:40:39 +0000 UTC  }]
    Sep  2 11:42:43.830: INFO: 
    Sep  2 11:42:45.831: INFO: The status of Pod coredns-78fcd69978-lkh6b is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  2 11:42:45.831: INFO: The status of Pod coredns-78fcd69978-xrwb7 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  2 11:42:45.831: INFO: 14 / 16 pods in namespace 'kube-system' are running and ready (26 seconds elapsed)
    Sep  2 11:42:45.831: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep  2 11:42:45.831: INFO: POD                       NODE                                                           PHASE    GRACE  CONDITIONS
    Sep  2 11:42:45.831: INFO: coredns-78fcd69978-lkh6b  k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu               Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:42:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:42:45 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:42:45 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:42:45 +0000 UTC  }]
    Sep  2 11:42:45.831: INFO: coredns-78fcd69978-xrwb7  k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:42:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:42:45 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:42:45 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 11:42:45 +0000 UTC  }]
    Sep  2 11:42:45.831: INFO: 
... skipping 44 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:42:48.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-8198" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    W0902 11:42:48.004002      19 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
    Sep  2 11:42:48.004: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide pod UID as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep  2 11:42:48.024: INFO: Waiting up to 5m0s for pod "downward-api-e837292e-98d3-45d2-ad57-e94b0c64918c" in namespace "downward-api-2818" to be "Succeeded or Failed"
    Sep  2 11:42:48.029: INFO: Pod "downward-api-e837292e-98d3-45d2-ad57-e94b0c64918c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.483065ms
    Sep  2 11:42:50.035: INFO: Pod "downward-api-e837292e-98d3-45d2-ad57-e94b0c64918c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011200724s
    Sep  2 11:42:52.040: INFO: Pod "downward-api-e837292e-98d3-45d2-ad57-e94b0c64918c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016040601s
    Sep  2 11:42:54.045: INFO: Pod "downward-api-e837292e-98d3-45d2-ad57-e94b0c64918c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020758653s
    STEP: Saw pod success
    Sep  2 11:42:54.045: INFO: Pod "downward-api-e837292e-98d3-45d2-ad57-e94b0c64918c" satisfied condition "Succeeded or Failed"
    Sep  2 11:42:54.048: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-cznwre pod downward-api-e837292e-98d3-45d2-ad57-e94b0c64918c container dapi-container: <nil>
    STEP: delete the pod
    Sep  2 11:42:54.074: INFO: Waiting for pod downward-api-e837292e-98d3-45d2-ad57-e94b0c64918c to disappear
    Sep  2 11:42:54.077: INFO: Pod downward-api-e837292e-98d3-45d2-ad57-e94b0c64918c no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 8 lines ...
    Sep  2 11:42:48.106: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on node default medium
    Sep  2 11:42:48.147: INFO: Waiting up to 5m0s for pod "pod-50b4f3bb-0f57-464f-b032-6e4e32503568" in namespace "emptydir-4264" to be "Succeeded or Failed"
    Sep  2 11:42:48.150: INFO: Pod "pod-50b4f3bb-0f57-464f-b032-6e4e32503568": Phase="Pending", Reason="", readiness=false. Elapsed: 3.019356ms
    Sep  2 11:42:50.155: INFO: Pod "pod-50b4f3bb-0f57-464f-b032-6e4e32503568": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008339668s
    Sep  2 11:42:52.170: INFO: Pod "pod-50b4f3bb-0f57-464f-b032-6e4e32503568": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023238575s
    Sep  2 11:42:54.179: INFO: Pod "pod-50b4f3bb-0f57-464f-b032-6e4e32503568": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03234801s
    Sep  2 11:42:56.187: INFO: Pod "pod-50b4f3bb-0f57-464f-b032-6e4e32503568": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.039856556s
    STEP: Saw pod success
    Sep  2 11:42:56.187: INFO: Pod "pod-50b4f3bb-0f57-464f-b032-6e4e32503568" satisfied condition "Succeeded or Failed"
    Sep  2 11:42:56.190: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod pod-50b4f3bb-0f57-464f-b032-6e4e32503568 container test-container: <nil>
    STEP: delete the pod
    Sep  2 11:42:56.235: INFO: Waiting for pod pod-50b4f3bb-0f57-464f-b032-6e4e32503568 to disappear
    Sep  2 11:42:56.238: INFO: Pod pod-50b4f3bb-0f57-464f-b032-6e4e32503568 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:42:56.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-4264" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    W0902 11:42:47.991413      20 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
    Sep  2 11:42:47.991: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override arguments
    Sep  2 11:42:48.009: INFO: Waiting up to 5m0s for pod "client-containers-dc5494b7-59f7-4c76-b29b-595bc2b5b7f3" in namespace "containers-5607" to be "Succeeded or Failed"
    Sep  2 11:42:48.016: INFO: Pod "client-containers-dc5494b7-59f7-4c76-b29b-595bc2b5b7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.143781ms
    Sep  2 11:42:50.021: INFO: Pod "client-containers-dc5494b7-59f7-4c76-b29b-595bc2b5b7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012022551s
    Sep  2 11:42:52.033: INFO: Pod "client-containers-dc5494b7-59f7-4c76-b29b-595bc2b5b7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024155985s
    Sep  2 11:42:54.038: INFO: Pod "client-containers-dc5494b7-59f7-4c76-b29b-595bc2b5b7f3": Phase="Running", Reason="", readiness=true. Elapsed: 6.029190874s
    Sep  2 11:42:56.041: INFO: Pod "client-containers-dc5494b7-59f7-4c76-b29b-595bc2b5b7f3": Phase="Running", Reason="", readiness=false. Elapsed: 8.032672975s
    Sep  2 11:42:58.046: INFO: Pod "client-containers-dc5494b7-59f7-4c76-b29b-595bc2b5b7f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.037508474s
    STEP: Saw pod success
    Sep  2 11:42:58.046: INFO: Pod "client-containers-dc5494b7-59f7-4c76-b29b-595bc2b5b7f3" satisfied condition "Succeeded or Failed"
    Sep  2 11:42:58.050: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-cznwre pod client-containers-dc5494b7-59f7-4c76-b29b-595bc2b5b7f3 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  2 11:42:58.077: INFO: Waiting for pod client-containers-dc5494b7-59f7-4c76-b29b-595bc2b5b7f3 to disappear
    Sep  2 11:42:58.081: INFO: Pod client-containers-dc5494b7-59f7-4c76-b29b-595bc2b5b7f3 no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:42:58.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-5607" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":38,"failed":0}
    
    SSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:42:54.092: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename endpointslice
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:42:58.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-2629" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:43:00.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-5961" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":1,"skipped":11,"failed":0}
    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:43:00.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-6782" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":2,"skipped":25,"failed":0}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:43:00.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-8866" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":3,"skipped":30,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:43:13.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-5238" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":4,"skipped":57,"failed":0}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 45 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:43:19.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-9413" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":3,"skipped":38,"failed":0}
    
    S
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:43:23.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-9695" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":39,"failed":0}
    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    Sep  2 11:43:25.590: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
    Sep  2 11:43:25.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3785 describe pod agnhost-primary-dnjq9'
    Sep  2 11:43:25.682: INFO: stderr: ""
    Sep  2 11:43:25.682: INFO: stdout: "Name:         agnhost-primary-dnjq9\nNamespace:    kubectl-3785\nPriority:     0\nNode:         k8s-upgrade-and-conformance-rxa2hz-worker-cznwre/172.18.0.7\nStart Time:   Fri, 02 Sep 2022 11:43:24 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           192.168.2.7\nIPs:\n  IP:           192.168.2.7\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://6e3bb15898ddd74b9e1d86ff38cf48d69d967f733edbc03629625f3342a94ed2\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.39\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 02 Sep 2022 11:43:25 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xqzr9 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-xqzr9:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  1s    default-scheduler  Successfully assigned kubectl-3785/agnhost-primary-dnjq9 to k8s-upgrade-and-conformance-rxa2hz-worker-cznwre\n  Normal  Pulled     1s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n  Normal  Created    1s    kubelet            Created container agnhost-primary\n  Normal  Started    0s    kubelet            Started container agnhost-primary\n"
    Sep  2 11:43:25.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3785 describe rc agnhost-primary'
    Sep  2 11:43:25.781: INFO: stderr: ""
    Sep  2 11:43:25.781: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-3785\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.39\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  1s    replication-controller  Created pod: agnhost-primary-dnjq9\n"
    Sep  2 11:43:25.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3785 describe service agnhost-primary'
    Sep  2 11:43:25.889: INFO: stderr: ""
    Sep  2 11:43:25.889: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-3785\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                10.132.153.148\nIPs:               10.132.153.148\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         192.168.2.7:6379\nSession Affinity:  None\nEvents:            <none>\n"
    Sep  2 11:43:25.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3785 describe node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5'
    Sep  2 11:43:26.026: INFO: stderr: ""
    Sep  2 11:43:26.026: INFO: stdout: "Name:               k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5\n                    kubernetes.io/os=linux\nAnnotations:        cluster.x-k8s.io/cluster-name: k8s-upgrade-and-conformance-rxa2hz\n                    cluster.x-k8s.io/cluster-namespace: k8s-upgrade-and-conformance-7hitst\n                    cluster.x-k8s.io/machine: k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5\n                    cluster.x-k8s.io/owner-kind: MachineSet\n                    cluster.x-k8s.io/owner-name: k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457\n                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Fri, 02 Sep 2022 11:40:13 +0000\nTaints:             <none>\nUnschedulable:      false\nLease:\n  HolderIdentity:  k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5\n  AcquireTime:     <unset>\n  RenewTime:       Fri, 02 Sep 2022 11:43:16 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Fri, 02 Sep 2022 11:43:03 +0000   Fri, 02 Sep 2022 11:40:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Fri, 02 Sep 2022 11:43:03 +0000   Fri, 02 Sep 2022 11:40:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Fri, 02 Sep 2022 11:43:03 +0000   Fri, 02 Sep 2022 11:40:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Fri, 02 Sep 2022 11:43:03 +0000   Fri, 02 Sep 2022 11:40:33 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.4\n  Hostname:    k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5\nCapacity:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             65860680Ki\n  pods:               110\nAllocatable:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             65860680Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 2d9c61ec7346420081cd270ced2c4c1a\n  System UUID:                189c8259-cd37-4a9e-b612-048fae508215\n  Boot ID:                    08337fa3-7f3f-43ac-8dff-880a96eeece3\n  Kernel Version:             5.4.0-1072-gke\n  OS Image:                   Ubuntu 22.04.1 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.6.7\n  Kubelet Version:            v1.22.13\n  Kube-Proxy Version:         v1.22.13\nPodCIDR:                      192.168.0.0/24\nPodCIDRs:                     192.168.0.0/24\nProviderID:                   docker:////k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5\nNon-terminated Pods:     
     (6 in total)\n  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---\n  container-probe-1288        liveness-24763b00-d81b-4153-91de-b391f30c782d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s\n  kube-system                 kindnet-q6v5l                                    100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m13s\n  kube-system                 kube-proxy-rp4zl                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s\n  services-1619               affinity-clusterip-f5l76                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s\n  svcaccounts-8866            pod-service-account-defaultsa-mountspec          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s\n  svcaccounts-8866            pod-service-account-mountsa-mountspec            0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests   Limits\n  --------           --------   ------\n  cpu                100m (1%)  100m (1%)\n  memory             50Mi (0%)  50Mi (0%)\n  ephemeral-storage  0 (0%)     0 (0%)\n  hugepages-1Gi      0 (0%)     0 (0%)\n  hugepages-2Mi      0 (0%)     0 (0%)\nEvents:\n  Type    Reason    Age    From        Message\n  ----    ------    ----   ----        -------\n  Normal  Starting  2m57s  kube-proxy  \n"
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:43:26.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-3785" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":5,"skipped":58,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:43:26.182: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-f54b12d2-d6fa-4d47-b59e-5c75e1fa4b26
    STEP: Creating a pod to test consume configMaps
    Sep  2 11:43:26.224: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d5ce6b52-cc14-4ccc-9233-3b556db21211" in namespace "projected-7538" to be "Succeeded or Failed"
    Sep  2 11:43:26.228: INFO: Pod "pod-projected-configmaps-d5ce6b52-cc14-4ccc-9233-3b556db21211": Phase="Pending", Reason="", readiness=false. Elapsed: 3.624649ms
    Sep  2 11:43:28.232: INFO: Pod "pod-projected-configmaps-d5ce6b52-cc14-4ccc-9233-3b556db21211": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008174019s
    Sep  2 11:43:30.237: INFO: Pod "pod-projected-configmaps-d5ce6b52-cc14-4ccc-9233-3b556db21211": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012804659s
    STEP: Saw pod success
    Sep  2 11:43:30.237: INFO: Pod "pod-projected-configmaps-d5ce6b52-cc14-4ccc-9233-3b556db21211" satisfied condition "Succeeded or Failed"
    Sep  2 11:43:30.240: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod pod-projected-configmaps-d5ce6b52-cc14-4ccc-9233-3b556db21211 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  2 11:43:30.266: INFO: Waiting for pod pod-projected-configmaps-d5ce6b52-cc14-4ccc-9233-3b556db21211 to disappear
    Sep  2 11:43:30.269: INFO: Pod pod-projected-configmaps-d5ce6b52-cc14-4ccc-9233-3b556db21211 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:43:30.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7538" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":87,"failed":0}
    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-configmap-7t5p
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  2 11:43:30.322: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7t5p" in namespace "subpath-6054" to be "Succeeded or Failed"
    Sep  2 11:43:30.325: INFO: Pod "pod-subpath-test-configmap-7t5p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.843002ms
    Sep  2 11:43:32.329: INFO: Pod "pod-subpath-test-configmap-7t5p": Phase="Running", Reason="", readiness=true. Elapsed: 2.006964264s
    Sep  2 11:43:34.332: INFO: Pod "pod-subpath-test-configmap-7t5p": Phase="Running", Reason="", readiness=true. Elapsed: 4.009907459s
    Sep  2 11:43:36.337: INFO: Pod "pod-subpath-test-configmap-7t5p": Phase="Running", Reason="", readiness=true. Elapsed: 6.014588634s
    Sep  2 11:43:38.342: INFO: Pod "pod-subpath-test-configmap-7t5p": Phase="Running", Reason="", readiness=true. Elapsed: 8.019521398s
    Sep  2 11:43:40.345: INFO: Pod "pod-subpath-test-configmap-7t5p": Phase="Running", Reason="", readiness=true. Elapsed: 10.022667465s
... skipping 2 lines ...
    Sep  2 11:43:46.359: INFO: Pod "pod-subpath-test-configmap-7t5p": Phase="Running", Reason="", readiness=true. Elapsed: 16.036938127s
    Sep  2 11:43:48.364: INFO: Pod "pod-subpath-test-configmap-7t5p": Phase="Running", Reason="", readiness=true. Elapsed: 18.041269575s
    Sep  2 11:43:50.368: INFO: Pod "pod-subpath-test-configmap-7t5p": Phase="Running", Reason="", readiness=true. Elapsed: 20.045771979s
    Sep  2 11:43:52.373: INFO: Pod "pod-subpath-test-configmap-7t5p": Phase="Running", Reason="", readiness=false. Elapsed: 22.050288066s
    Sep  2 11:43:54.377: INFO: Pod "pod-subpath-test-configmap-7t5p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.054452419s
    STEP: Saw pod success
    Sep  2 11:43:54.377: INFO: Pod "pod-subpath-test-configmap-7t5p" satisfied condition "Succeeded or Failed"
    Sep  2 11:43:54.379: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5 pod pod-subpath-test-configmap-7t5p container test-container-subpath-configmap-7t5p: <nil>
    STEP: delete the pod
    Sep  2 11:43:54.407: INFO: Waiting for pod pod-subpath-test-configmap-7t5p to disappear
    Sep  2 11:43:54.410: INFO: Pod pod-subpath-test-configmap-7t5p no longer exists
    STEP: Deleting pod pod-subpath-test-configmap-7t5p
    Sep  2 11:43:54.410: INFO: Deleting pod "pod-subpath-test-configmap-7t5p" in namespace "subpath-6054"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:43:54.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-6054" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":89,"failed":0}
    
    S
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:43:56.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-9862" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":8,"skipped":90,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:43:56.567: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename init-container
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
    [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod
    Sep  2 11:43:56.598: INFO: PodSpec: initContainers in spec.initContainers
    [AfterEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:44:02.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-6883" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":9,"skipped":160,"failed":0}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:44:02.164: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir volume type on node default medium
    Sep  2 11:44:02.205: INFO: Waiting up to 5m0s for pod "pod-8624725e-a0c3-4db2-a7ef-22d717a31c4c" in namespace "emptydir-5046" to be "Succeeded or Failed"
    Sep  2 11:44:02.212: INFO: Pod "pod-8624725e-a0c3-4db2-a7ef-22d717a31c4c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.525152ms
    Sep  2 11:44:04.217: INFO: Pod "pod-8624725e-a0c3-4db2-a7ef-22d717a31c4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011176056s
    Sep  2 11:44:06.222: INFO: Pod "pod-8624725e-a0c3-4db2-a7ef-22d717a31c4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016547457s
    STEP: Saw pod success
    Sep  2 11:44:06.222: INFO: Pod "pod-8624725e-a0c3-4db2-a7ef-22d717a31c4c" satisfied condition "Succeeded or Failed"
    Sep  2 11:44:06.226: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-cznwre pod pod-8624725e-a0c3-4db2-a7ef-22d717a31c4c container test-container: <nil>
    STEP: delete the pod
    Sep  2 11:44:06.245: INFO: Waiting for pod pod-8624725e-a0c3-4db2-a7ef-22d717a31c4c to disappear
    Sep  2 11:44:06.248: INFO: Pod pod-8624725e-a0c3-4db2-a7ef-22d717a31c4c no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:44:06.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-5046" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":165,"failed":0}
    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:44:13.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-7089" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":11,"skipped":180,"failed":0}
    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-3744-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":12,"skipped":191,"failed":0}
    
    S
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    I0902 11:42:58.326832      19 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-1619, replica count: 3
    I0902 11:43:01.377499      19 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
    I0902 11:43:04.378544      19 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
    Sep  2 11:43:04.384: INFO: Creating new exec pod
    Sep  2 11:43:07.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:43:10.020: INFO: rc: 1
    Sep  2 11:43:10.020: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  2 11:43:11.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:43:13.210: INFO: rc: 1
    Sep  2 11:43:13.210: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  2 11:43:14.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:43:16.194: INFO: rc: 1
    Sep  2 11:43:16.194: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  2 11:43:17.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:43:19.196: INFO: rc: 1
    Sep  2 11:43:19.196: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  2 11:43:20.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:43:22.204: INFO: rc: 1
    Sep  2 11:43:22.204: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  2 11:43:23.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:43:25.176: INFO: rc: 1
    Sep  2 11:43:25.176: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  2 11:43:26.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:43:28.195: INFO: rc: 1
    Sep  2 11:43:28.195: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  2 11:43:29.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:43:31.176: INFO: rc: 1
    Sep  2 11:43:31.177: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  2 11:43:32.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:43:34.192: INFO: rc: 1
    Sep  2 11:43:34.192: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  2 11:43:35.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:43:37.163: INFO: rc: 1
    Sep  2 11:43:37.163: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  2 11:43:38.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:43:40.186: INFO: rc: 1
    Sep  2 11:43:40.186: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  2 11:43:41.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:43:43.187: INFO: rc: 1
    Sep  2 11:43:43.187: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  2 11:43:44.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:43:46.177: INFO: rc: 1
    Sep  2 11:43:46.177: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  2 11:43:47.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:43:49.188: INFO: rc: 1
    Sep  2 11:43:49.188: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  2 11:43:50.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:43:52.183: INFO: rc: 1
    Sep  2 11:43:52.183: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:
    Command stdout:
    
    stderr:
    + + ncecho -v hostName -t -w
     2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  2 11:43:53.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:43:55.185: INFO: rc: 1
    Sep  2 11:43:55.185: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  2 11:43:56.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:43:58.205: INFO: rc: 1
    Sep  2 11:43:58.205: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  2 11:43:59.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:44:01.179: INFO: rc: 1
    Sep  2 11:44:01.179: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  2 11:44:02.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:44:04.179: INFO: rc: 1
    Sep  2 11:44:04.179: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:44:05.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:44:07.196: INFO: rc: 1
    Sep  2 11:44:07.196: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:44:08.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:44:10.195: INFO: rc: 1
    Sep  2 11:44:10.195: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:44:11.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:44:13.253: INFO: rc: 1
    Sep  2 11:44:13.253: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:44:14.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:44:16.192: INFO: rc: 1
    Sep  2 11:44:16.192: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:44:17.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:44:19.202: INFO: rc: 1
    Sep  2 11:44:19.202: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:44:20.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:44:22.210: INFO: rc: 1
    Sep  2 11:44:22.210: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:44:23.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:44:25.184: INFO: rc: 1
    Sep  2 11:44:25.184: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:44:26.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:44:28.195: INFO: rc: 1
    Sep  2 11:44:28.195: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:44:29.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:44:31.174: INFO: rc: 1
    Sep  2 11:44:31.175: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:44:32.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:44:34.182: INFO: rc: 1
    Sep  2 11:44:34.182: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:44:35.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:44:37.185: INFO: rc: 1
    Sep  2 11:44:37.185: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:44:38.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:44:40.197: INFO: rc: 1
    Sep  2 11:44:40.197: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:44:41.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:44:43.179: INFO: rc: 1
    Sep  2 11:44:43.179: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + nc -v -t -w 2 affinity-clusterip 80
    + echo hostName
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:44:44.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:44:46.172: INFO: rc: 1
    Sep  2 11:44:46.172: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:44:47.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:44:49.195: INFO: rc: 1
    Sep  2 11:44:49.195: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:44:50.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:44:52.195: INFO: rc: 1
    Sep  2 11:44:52.195: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:44:53.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:44:55.196: INFO: rc: 1
    Sep  2 11:44:55.196: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + + echonc hostName -v
     -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:44:56.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:44:58.195: INFO: rc: 1
    Sep  2 11:44:58.195: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:44:59.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:45:01.180: INFO: rc: 1
    Sep  2 11:45:01.180: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:45:02.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:45:04.178: INFO: rc: 1
    Sep  2 11:45:04.178: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:45:05.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:45:07.193: INFO: rc: 1
    Sep  2 11:45:07.193: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:45:08.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:45:10.192: INFO: rc: 1
    Sep  2 11:45:10.192: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:45:10.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  2 11:45:12.344: INFO: rc: 1
    Sep  2 11:45:12.344: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1619 exec execpod-affinityqvx96 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  2 11:45:12.345: FAIL: Unexpected error:

        <*errors.errorString | 0xc0017e6430>: {
            s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol",
        }
        service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol
    occurred
    
... skipping 27 lines ...
    • Failure [136.186 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
      should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  2 11:45:12.345: Unexpected error:

          <*errors.errorString | 0xc0017e6430>: {
              s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol",
          }
          service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3278
    ------------------------------
    {"msg":"FAILED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":38,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:45:14.457: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 42 lines ...
    STEP: Destroying namespace "services-1151" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":38,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 29 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:45:31.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-5960" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":4,"skipped":51,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:45:37.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-3113" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":5,"skipped":66,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Aggregator
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:45:46.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "aggregator-9795" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":6,"skipped":79,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:45:56.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-4350" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":7,"skipped":82,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:46:12.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-9776" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":8,"skipped":87,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:46:12.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-6939" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":9,"skipped":94,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:242.702 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":45,"failed":0}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:47:03.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-1289" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":3,"skipped":60,"failed":0}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:242.717 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":61,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "webhook-3637-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":6,"skipped":68,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:47:49.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-489" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":7,"skipped":84,"failed":0}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "services-4886" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":8,"skipped":95,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:47:57.993: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-bd7ad3c3-4d55-4532-bab5-6b97a2231e5f
    STEP: Creating a pod to test consume configMaps
    Sep  2 11:47:58.070: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c7dbb041-0587-4f52-b140-d851957d3b53" in namespace "projected-5957" to be "Succeeded or Failed"

    Sep  2 11:47:58.081: INFO: Pod "pod-projected-configmaps-c7dbb041-0587-4f52-b140-d851957d3b53": Phase="Pending", Reason="", readiness=false. Elapsed: 10.653698ms
    Sep  2 11:48:00.088: INFO: Pod "pod-projected-configmaps-c7dbb041-0587-4f52-b140-d851957d3b53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017803648s
    Sep  2 11:48:02.095: INFO: Pod "pod-projected-configmaps-c7dbb041-0587-4f52-b140-d851957d3b53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024379085s
    STEP: Saw pod success
    Sep  2 11:48:02.095: INFO: Pod "pod-projected-configmaps-c7dbb041-0587-4f52-b140-d851957d3b53" satisfied condition "Succeeded or Failed"

    Sep  2 11:48:02.099: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod pod-projected-configmaps-c7dbb041-0587-4f52-b140-d851957d3b53 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  2 11:48:02.137: INFO: Waiting for pod pod-projected-configmaps-c7dbb041-0587-4f52-b140-d851957d3b53 to disappear
    Sep  2 11:48:02.141: INFO: Pod pod-projected-configmaps-c7dbb041-0587-4f52-b140-d851957d3b53 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:48:02.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5957" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":176,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:48:04.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-6268" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":179,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's memory limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  2 11:48:04.402: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc445816-2f3f-45d3-b9ce-880cb78d1e32" in namespace "downward-api-5807" to be "Succeeded or Failed"

    Sep  2 11:48:04.407: INFO: Pod "downwardapi-volume-bc445816-2f3f-45d3-b9ce-880cb78d1e32": Phase="Pending", Reason="", readiness=false. Elapsed: 5.270213ms
    Sep  2 11:48:06.415: INFO: Pod "downwardapi-volume-bc445816-2f3f-45d3-b9ce-880cb78d1e32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012547204s
    Sep  2 11:48:08.421: INFO: Pod "downwardapi-volume-bc445816-2f3f-45d3-b9ce-880cb78d1e32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019296586s
    STEP: Saw pod success
    Sep  2 11:48:08.421: INFO: Pod "downwardapi-volume-bc445816-2f3f-45d3-b9ce-880cb78d1e32" satisfied condition "Succeeded or Failed"

    Sep  2 11:48:08.427: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod downwardapi-volume-bc445816-2f3f-45d3-b9ce-880cb78d1e32 container client-container: <nil>
    STEP: delete the pod
    Sep  2 11:48:08.451: INFO: Waiting for pod downwardapi-volume-bc445816-2f3f-45d3-b9ce-880cb78d1e32 to disappear
    Sep  2 11:48:08.457: INFO: Pod downwardapi-volume-bc445816-2f3f-45d3-b9ce-880cb78d1e32 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:48:08.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-5807" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":183,"failed":0}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:48:12.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-8337" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":194,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:244.806 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":192,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 37 lines ...
    Sep  2 11:48:20.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-9039 explain e2e-test-crd-publish-openapi-5108-crds.spec'
    Sep  2 11:48:20.966: INFO: stderr: ""
    Sep  2 11:48:20.966: INFO: stdout: "KIND:     e2e-test-crd-publish-openapi-5108-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
    Sep  2 11:48:20.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-9039 explain e2e-test-crd-publish-openapi-5108-crds.spec.bars'
    Sep  2 11:48:21.415: INFO: stderr: ""
    Sep  2 11:48:21.415: INFO: stdout: "KIND:     e2e-test-crd-publish-openapi-5108-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
    STEP: kubectl explain works to return error when explain is called on property that doesn't exist

    Sep  2 11:48:21.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-9039 explain e2e-test-crd-publish-openapi-5108-crds.spec.bars2'
    Sep  2 11:48:21.861: INFO: rc: 1
    [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:48:25.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-9039" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":13,"skipped":201,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:48:25.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-5421" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":14,"skipped":202,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:48:24.790: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-d3fc3411-be0e-4528-baf9-a6a844202b8b
    STEP: Creating a pod to test consume configMaps
    Sep  2 11:48:24.873: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-802b50b7-6a5b-4dde-a322-5665734567ed" in namespace "projected-1825" to be "Succeeded or Failed"

    Sep  2 11:48:24.878: INFO: Pod "pod-projected-configmaps-802b50b7-6a5b-4dde-a322-5665734567ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.54166ms
    Sep  2 11:48:26.885: INFO: Pod "pod-projected-configmaps-802b50b7-6a5b-4dde-a322-5665734567ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011804056s
    Sep  2 11:48:28.894: INFO: Pod "pod-projected-configmaps-802b50b7-6a5b-4dde-a322-5665734567ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020711395s
    STEP: Saw pod success
    Sep  2 11:48:28.895: INFO: Pod "pod-projected-configmaps-802b50b7-6a5b-4dde-a322-5665734567ed" satisfied condition "Succeeded or Failed"

    Sep  2 11:48:28.899: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5 pod pod-projected-configmaps-802b50b7-6a5b-4dde-a322-5665734567ed container agnhost-container: <nil>
    STEP: delete the pod
    Sep  2 11:48:28.944: INFO: Waiting for pod pod-projected-configmaps-802b50b7-6a5b-4dde-a322-5665734567ed to disappear
    Sep  2 11:48:28.949: INFO: Pod pod-projected-configmaps-802b50b7-6a5b-4dde-a322-5665734567ed no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:48:28.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-1825" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":206,"failed":0}

    [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:48:28.970: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename crd-webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
    STEP: Destroying namespace "crd-webhook-7224" for this suite.
    [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":15,"skipped":206,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 191 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:48:47.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-8607" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":16,"skipped":211,"failed":0}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:48:48.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-867" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":17,"skipped":222,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 83 lines ...
    STEP: Destroying namespace "services-3399" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":15,"skipped":240,"failed":0}

    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:49:06.478: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename svcaccounts
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should mount projected service account token [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test service account token: 
    Sep  2 11:49:06.543: INFO: Waiting up to 5m0s for pod "test-pod-9a72eac3-1328-43fa-b3ee-6a59245c8b42" in namespace "svcaccounts-3690" to be "Succeeded or Failed"

    Sep  2 11:49:06.549: INFO: Pod "test-pod-9a72eac3-1328-43fa-b3ee-6a59245c8b42": Phase="Pending", Reason="", readiness=false. Elapsed: 5.662481ms
    Sep  2 11:49:08.556: INFO: Pod "test-pod-9a72eac3-1328-43fa-b3ee-6a59245c8b42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012364077s
    Sep  2 11:49:10.564: INFO: Pod "test-pod-9a72eac3-1328-43fa-b3ee-6a59245c8b42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02035535s
    STEP: Saw pod success
    Sep  2 11:49:10.564: INFO: Pod "test-pod-9a72eac3-1328-43fa-b3ee-6a59245c8b42" satisfied condition "Succeeded or Failed"

    Sep  2 11:49:10.573: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5 pod test-pod-9a72eac3-1328-43fa-b3ee-6a59245c8b42 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  2 11:49:10.605: INFO: Waiting for pod test-pod-9a72eac3-1328-43fa-b3ee-6a59245c8b42 to disappear
    Sep  2 11:49:10.613: INFO: Pod test-pod-9a72eac3-1328-43fa-b3ee-6a59245c8b42 no longer exists
    [AfterEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:49:10.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-3690" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":16,"skipped":240,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
    [It] should serve a basic endpoint from pods  [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating service endpoint-test2 in namespace services-9148
    STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9148 to expose endpoints map[]
    Sep  2 11:47:03.701: INFO: Failed to get Endpoints object: endpoints "endpoint-test2" not found

    Sep  2 11:47:04.710: INFO: successfully validated that service endpoint-test2 in namespace services-9148 exposes endpoints map[]
    STEP: Creating pod pod1 in namespace services-9148
    Sep  2 11:47:04.723: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
    Sep  2 11:47:06.728: INFO: The status of Pod pod1 is Running (Ready = true)
    STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9148 to expose endpoints map[pod1:[80]]
    Sep  2 11:47:06.742: INFO: successfully validated that service endpoint-test2 in namespace services-9148 exposes endpoints map[pod1:[80]]
... skipping 122 lines ...
    Sep  2 11:49:09.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9148 exec execpodzhwpj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
    Sep  2 11:49:12.247: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n"
    Sep  2 11:49:12.247: INFO: stdout: ""
    Sep  2 11:49:12.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9148 exec execpodzhwpj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
    Sep  2 11:49:14.632: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n"
    Sep  2 11:49:14.632: INFO: stdout: ""
    Sep  2 11:49:14.633: FAIL: Unexpected error:

        <*errors.errorString | 0xc0007dc5e0>: {
            s: "service is not reachable within 2m0s timeout on endpoint endpoint-test2:80 over TCP protocol",
        }
        service is not reachable within 2m0s timeout on endpoint endpoint-test2:80 over TCP protocol
    occurred
    
... skipping 19 lines ...
    • Failure [131.132 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
      should serve a basic endpoint from pods  [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  2 11:49:14.633: Unexpected error:

          <*errors.errorString | 0xc0007dc5e0>: {
              s: "service is not reachable within 2m0s timeout on endpoint endpoint-test2:80 over TCP protocol",
          }
          service is not reachable within 2m0s timeout on endpoint endpoint-test2:80 over TCP protocol
      occurred
    
... skipping 6 lines ...
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-6f9574d2-b807-4aac-a64b-c9f716c79af3
    STEP: Creating a pod to test consume configMaps
    Sep  2 11:49:10.807: INFO: Waiting up to 5m0s for pod "pod-configmaps-38d5b4c6-21a4-41f9-a40d-88b564ae164d" in namespace "configmap-1744" to be "Succeeded or Failed"

    Sep  2 11:49:10.822: INFO: Pod "pod-configmaps-38d5b4c6-21a4-41f9-a40d-88b564ae164d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.578195ms
    Sep  2 11:49:12.830: INFO: Pod "pod-configmaps-38d5b4c6-21a4-41f9-a40d-88b564ae164d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021152399s
    Sep  2 11:49:14.881: INFO: Pod "pod-configmaps-38d5b4c6-21a4-41f9-a40d-88b564ae164d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072592251s
    STEP: Saw pod success
    Sep  2 11:49:14.881: INFO: Pod "pod-configmaps-38d5b4c6-21a4-41f9-a40d-88b564ae164d" satisfied condition "Succeeded or Failed"

    Sep  2 11:49:14.916: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod pod-configmaps-38d5b4c6-21a4-41f9-a40d-88b564ae164d container agnhost-container: <nil>
    STEP: delete the pod
    Sep  2 11:49:14.983: INFO: Waiting for pod pod-configmaps-38d5b4c6-21a4-41f9-a40d-88b564ae164d to disappear
    Sep  2 11:49:15.016: INFO: Pod pod-configmaps-38d5b4c6-21a4-41f9-a40d-88b564ae164d no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:49:15.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-1744" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":250,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:49:15.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-5667" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":18,"skipped":281,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  2 11:49:15.448: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cae3a296-088f-4466-b3b7-118b917656d2" in namespace "projected-4287" to be "Succeeded or Failed"
    Sep  2 11:49:15.453: INFO: Pod "downwardapi-volume-cae3a296-088f-4466-b3b7-118b917656d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.505888ms
    Sep  2 11:49:17.471: INFO: Pod "downwardapi-volume-cae3a296-088f-4466-b3b7-118b917656d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022835668s
    Sep  2 11:49:19.478: INFO: Pod "downwardapi-volume-cae3a296-088f-4466-b3b7-118b917656d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029852618s
    STEP: Saw pod success
    Sep  2 11:49:19.479: INFO: Pod "downwardapi-volume-cae3a296-088f-4466-b3b7-118b917656d2" satisfied condition "Succeeded or Failed"
    Sep  2 11:49:19.484: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod downwardapi-volume-cae3a296-088f-4466-b3b7-118b917656d2 container client-container: <nil>
    STEP: delete the pod
    Sep  2 11:49:19.522: INFO: Waiting for pod downwardapi-volume-cae3a296-088f-4466-b3b7-118b917656d2 to disappear
    Sep  2 11:49:19.529: INFO: Pod downwardapi-volume-cae3a296-088f-4466-b3b7-118b917656d2 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:49:19.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4287" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":294,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 43 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:49:19.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-8858" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":228,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":3,"skipped":71,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:49:14.827: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
    [It] should serve a basic endpoint from pods  [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating service endpoint-test2 in namespace services-2108
    STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2108 to expose endpoints map[]
    Sep  2 11:49:15.091: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found
    Sep  2 11:49:16.103: INFO: successfully validated that service endpoint-test2 in namespace services-2108 exposes endpoints map[]
    STEP: Creating pod pod1 in namespace services-2108
    Sep  2 11:49:16.118: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
    Sep  2 11:49:18.125: INFO: The status of Pod pod1 is Running (Ready = true)
    STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2108 to expose endpoints map[pod1:[80]]
    Sep  2 11:49:18.157: INFO: successfully validated that service endpoint-test2 in namespace services-2108 exposes endpoints map[pod1:[80]]
... skipping 37 lines ...
    STEP: Destroying namespace "services-2108" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":4,"skipped":71,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:49:19.583: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename job
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a job
    STEP: Ensuring job reaches completions
    [AfterEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:49:31.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-3471" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":20,"skipped":310,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:49:31.595: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-4d812276-3424-418b-9715-2fddfb7241a4
    STEP: Creating a pod to test consume secrets
    Sep  2 11:49:31.688: INFO: Waiting up to 5m0s for pod "pod-secrets-ad7b47b6-2c14-4830-afb4-fb545dd98b82" in namespace "secrets-8998" to be "Succeeded or Failed"
    Sep  2 11:49:31.694: INFO: Pod "pod-secrets-ad7b47b6-2c14-4830-afb4-fb545dd98b82": Phase="Pending", Reason="", readiness=false. Elapsed: 5.926981ms
    Sep  2 11:49:33.701: INFO: Pod "pod-secrets-ad7b47b6-2c14-4830-afb4-fb545dd98b82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0130868s
    Sep  2 11:49:35.707: INFO: Pod "pod-secrets-ad7b47b6-2c14-4830-afb4-fb545dd98b82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019283069s
    STEP: Saw pod success
    Sep  2 11:49:35.708: INFO: Pod "pod-secrets-ad7b47b6-2c14-4830-afb4-fb545dd98b82" satisfied condition "Succeeded or Failed"
    Sep  2 11:49:35.713: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5 pod pod-secrets-ad7b47b6-2c14-4830-afb4-fb545dd98b82 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  2 11:49:35.738: INFO: Waiting for pod pod-secrets-ad7b47b6-2c14-4830-afb4-fb545dd98b82 to disappear
    Sep  2 11:49:35.742: INFO: Pod pod-secrets-ad7b47b6-2c14-4830-afb4-fb545dd98b82 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:49:35.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-8998" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":130,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSliceMirroring
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:49:38.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslicemirroring-6459" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":21,"skipped":384,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:49:44.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-9026" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":-1,"completed":6,"skipped":144,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 28 lines ...
    STEP: Destroying namespace "webhook-9305-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":22,"skipped":385,"failed":0}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "crd-webhook-6779" for this suite.
    [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":23,"skipped":404,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with projected pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-projected-mjsw
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  2 11:49:44.300: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-mjsw" in namespace "subpath-6041" to be "Succeeded or Failed"
    Sep  2 11:49:44.305: INFO: Pod "pod-subpath-test-projected-mjsw": Phase="Pending", Reason="", readiness=false. Elapsed: 5.017523ms
    Sep  2 11:49:46.311: INFO: Pod "pod-subpath-test-projected-mjsw": Phase="Running", Reason="", readiness=true. Elapsed: 2.011251252s
    Sep  2 11:49:48.318: INFO: Pod "pod-subpath-test-projected-mjsw": Phase="Running", Reason="", readiness=true. Elapsed: 4.01844794s
    Sep  2 11:49:50.325: INFO: Pod "pod-subpath-test-projected-mjsw": Phase="Running", Reason="", readiness=true. Elapsed: 6.024756312s
    Sep  2 11:49:52.332: INFO: Pod "pod-subpath-test-projected-mjsw": Phase="Running", Reason="", readiness=true. Elapsed: 8.03222345s
    Sep  2 11:49:54.339: INFO: Pod "pod-subpath-test-projected-mjsw": Phase="Running", Reason="", readiness=true. Elapsed: 10.03902352s
... skipping 2 lines ...
    Sep  2 11:50:00.393: INFO: Pod "pod-subpath-test-projected-mjsw": Phase="Running", Reason="", readiness=true. Elapsed: 16.092804217s
    Sep  2 11:50:02.401: INFO: Pod "pod-subpath-test-projected-mjsw": Phase="Running", Reason="", readiness=true. Elapsed: 18.101230293s
    Sep  2 11:50:04.410: INFO: Pod "pod-subpath-test-projected-mjsw": Phase="Running", Reason="", readiness=true. Elapsed: 20.109993449s
    Sep  2 11:50:06.416: INFO: Pod "pod-subpath-test-projected-mjsw": Phase="Running", Reason="", readiness=false. Elapsed: 22.116275714s
    Sep  2 11:50:08.426: INFO: Pod "pod-subpath-test-projected-mjsw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.126361024s
    STEP: Saw pod success
    Sep  2 11:50:08.426: INFO: Pod "pod-subpath-test-projected-mjsw" satisfied condition "Succeeded or Failed"
    Sep  2 11:50:08.433: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5 pod pod-subpath-test-projected-mjsw container test-container-subpath-projected-mjsw: <nil>
    STEP: delete the pod
    Sep  2 11:50:08.457: INFO: Waiting for pod pod-subpath-test-projected-mjsw to disappear
    Sep  2 11:50:08.460: INFO: Pod pod-subpath-test-projected-mjsw no longer exists
    STEP: Deleting pod pod-subpath-test-projected-mjsw
    Sep  2 11:50:08.461: INFO: Deleting pod "pod-subpath-test-projected-mjsw" in namespace "subpath-6041"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:50:08.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-6041" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":181,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:50:08.493: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-360a47b7-da62-4a64-b958-569dac610899
    STEP: Creating a pod to test consume secrets
    Sep  2 11:50:08.670: INFO: Waiting up to 5m0s for pod "pod-secrets-d1504e97-279f-4b19-aaf9-292cf5cd3477" in namespace "secrets-8172" to be "Succeeded or Failed"
    Sep  2 11:50:08.680: INFO: Pod "pod-secrets-d1504e97-279f-4b19-aaf9-292cf5cd3477": Phase="Pending", Reason="", readiness=false. Elapsed: 9.672707ms
    Sep  2 11:50:10.686: INFO: Pod "pod-secrets-d1504e97-279f-4b19-aaf9-292cf5cd3477": Phase="Running", Reason="", readiness=false. Elapsed: 2.015842014s
    Sep  2 11:50:12.696: INFO: Pod "pod-secrets-d1504e97-279f-4b19-aaf9-292cf5cd3477": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025888769s
    STEP: Saw pod success
    Sep  2 11:50:12.696: INFO: Pod "pod-secrets-d1504e97-279f-4b19-aaf9-292cf5cd3477" satisfied condition "Succeeded or Failed"
    Sep  2 11:50:12.704: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod pod-secrets-d1504e97-279f-4b19-aaf9-292cf5cd3477 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  2 11:50:12.736: INFO: Waiting for pod pod-secrets-d1504e97-279f-4b19-aaf9-292cf5cd3477 to disappear
    Sep  2 11:50:12.742: INFO: Pod pod-secrets-d1504e97-279f-4b19-aaf9-292cf5cd3477 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:50:12.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-8172" for this suite.
    STEP: Destroying namespace "secret-namespace-5939" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":183,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:50:12.850: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on tmpfs
    Sep  2 11:50:12.939: INFO: Waiting up to 5m0s for pod "pod-3c6600e2-079a-40a6-907a-5679a45d2d3a" in namespace "emptydir-7336" to be "Succeeded or Failed"
    Sep  2 11:50:12.948: INFO: Pod "pod-3c6600e2-079a-40a6-907a-5679a45d2d3a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.278987ms
    Sep  2 11:50:14.955: INFO: Pod "pod-3c6600e2-079a-40a6-907a-5679a45d2d3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01527768s
    Sep  2 11:50:16.962: INFO: Pod "pod-3c6600e2-079a-40a6-907a-5679a45d2d3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023024297s
    STEP: Saw pod success
    Sep  2 11:50:16.963: INFO: Pod "pod-3c6600e2-079a-40a6-907a-5679a45d2d3a" satisfied condition "Succeeded or Failed"
    Sep  2 11:50:16.970: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5 pod pod-3c6600e2-079a-40a6-907a-5679a45d2d3a container test-container: <nil>
    STEP: delete the pod
    Sep  2 11:50:17.003: INFO: Waiting for pod pod-3c6600e2-079a-40a6-907a-5679a45d2d3a to disappear
    Sep  2 11:50:17.010: INFO: Pod pod-3c6600e2-079a-40a6-907a-5679a45d2d3a no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:50:17.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-7336" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":211,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:50:20.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-8110" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":215,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 37 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:50:20.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-9951" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":-1,"completed":24,"skipped":429,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:50:29.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-9390" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":-1,"completed":11,"skipped":241,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:50:35.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-3551" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":12,"skipped":263,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:50:41.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-8251" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":13,"skipped":266,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:50:43.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-8237" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":274,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:50:43.771: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename sysctl
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
    [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
    STEP: Watching for error events or started pod
    STEP: Waiting for pod completion
    STEP: Checking that the pod succeeded
    STEP: Getting logs from the pod
    STEP: Checking that the sysctl is actually updated
    [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:50:47.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-8698" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":15,"skipped":274,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 149 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:51:15.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-556" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":16,"skipped":282,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:51:15.265: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on node default medium
    Sep  2 11:51:15.540: INFO: Waiting up to 5m0s for pod "pod-33ce5dfb-d30e-4942-99cb-930d69f413fd" in namespace "emptydir-673" to be "Succeeded or Failed"
    Sep  2 11:51:15.546: INFO: Pod "pod-33ce5dfb-d30e-4942-99cb-930d69f413fd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055011ms
    Sep  2 11:51:17.550: INFO: Pod "pod-33ce5dfb-d30e-4942-99cb-930d69f413fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010490413s
    Sep  2 11:51:19.557: INFO: Pod "pod-33ce5dfb-d30e-4942-99cb-930d69f413fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016968898s
    STEP: Saw pod success
    Sep  2 11:51:19.557: INFO: Pod "pod-33ce5dfb-d30e-4942-99cb-930d69f413fd" satisfied condition "Succeeded or Failed"
    Sep  2 11:51:19.561: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5 pod pod-33ce5dfb-d30e-4942-99cb-930d69f413fd container test-container: <nil>
    STEP: delete the pod
    Sep  2 11:51:19.581: INFO: Waiting for pod pod-33ce5dfb-d30e-4942-99cb-930d69f413fd to disappear
    Sep  2 11:51:19.584: INFO: Pod pod-33ce5dfb-d30e-4942-99cb-930d69f413fd no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:51:19.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-673" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":293,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
    
    STEP: creating a pod to probe DNS
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  2 11:49:54.829: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-837.svc.cluster.local from pod dns-837/dns-test-0eb359a4-7e52-4342-b611-2b9e20c33ef7: the server is currently unable to handle the request (get pods dns-test-0eb359a4-7e52-4342-b611-2b9e20c33ef7)
    Sep  2 11:51:20.787: FAIL: Unable to read wheezy_hosts@dns-querier-2 from pod dns-837/dns-test-0eb359a4-7e52-4342-b611-2b9e20c33ef7: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-837/pods/dns-test-0eb359a4-7e52-4342-b611-2b9e20c33ef7/proxy/results/wheezy_hosts@dns-querier-2": context deadline exceeded

    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc000054090, 0x7f2d50303a68, 0x18, 0xc005de96b0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc000054090, 0xc004bfece0, 0x2a14500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f
... skipping 17 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b
    testing.tRunner(0xc00031bb00, 0x729a2d8)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0902 11:51:20.788632      19 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep  2 11:51:20.787: Unable to read wheezy_hosts@dns-querier-2 from pod dns-837/dns-test-0eb359a4-7e52-4342-b611-2b9e20c33ef7: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-837/pods/dns-test-0eb359a4-7e52-4342-b611-2b9e20c33ef7/proxy/results/wheezy_hosts@dns-querier-2\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:217, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc000054090, 0x7f2d50303a68, 0x18, 0xc005de96b0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc000054090, 0xc004bfece0, 0x2a14500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x78de4a8, 0xc000054090, 0xc005de9601, 0xc005de96b0, 0xc004bfece0, 0x6826620, 0xc004bfece0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:577 +0xe5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x78de4a8, 0xc000054090, 0x12a05f200, 0x8bb2c97000, 0xc004bfece0, 0x6d6e4e0, 0x2521201)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc003af7180, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc002f26f00, 0x8, 0x8, 0x702fe9b, 0x7, 0xc000303000, 0x7971668, 0xc005354420, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x13c\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000d39600, 0xc000303000, 0xc002f26f00, 0x8, 0x8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.7()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:279 +0x9f3\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc00031bb00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc00031bb00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b\ntesting.tRunner(0xc00031bb00, 0x729a2d8)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.
    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6bbe4c0, 0xc003a30100)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6bbe4c0, 0xc003a30100)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc0056ee140, 0x12d, 0x88abe86, 0x7d, 0xd9, 0xc0004f6400, 0xa88)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x62ef260, 0x77956f0)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc0056ee140, 0x12d, 0xc002463638, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc0056ee140, 0x12d, 0xc002463720, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x70d3e4f, 0x24, 0xc002463980, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc000054090, 0x7f2d50303a68, 0x18, 0xc005de96b0)
... skipping 69 lines ...
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-3f1465d3-8f5b-4de5-9dc4-a8293ac4d578
    STEP: Creating a pod to test consume secrets
    Sep  2 11:51:19.685: INFO: Waiting up to 5m0s for pod "pod-secrets-50575439-235b-403e-b76b-ba628ddd516c" in namespace "secrets-8968" to be "Succeeded or Failed"
    Sep  2 11:51:19.689: INFO: Pod "pod-secrets-50575439-235b-403e-b76b-ba628ddd516c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.726224ms
    Sep  2 11:51:21.694: INFO: Pod "pod-secrets-50575439-235b-403e-b76b-ba628ddd516c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008773313s
    Sep  2 11:51:23.701: INFO: Pod "pod-secrets-50575439-235b-403e-b76b-ba628ddd516c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016402766s
    STEP: Saw pod success
    Sep  2 11:51:23.702: INFO: Pod "pod-secrets-50575439-235b-403e-b76b-ba628ddd516c" satisfied condition "Succeeded or Failed"
    Sep  2 11:51:23.705: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod pod-secrets-50575439-235b-403e-b76b-ba628ddd516c container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  2 11:51:23.726: INFO: Waiting for pod pod-secrets-50575439-235b-403e-b76b-ba628ddd516c to disappear
    Sep  2 11:51:23.729: INFO: Pod pod-secrets-50575439-235b-403e-b76b-ba628ddd516c no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:51:23.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-8968" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":317,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":108,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:51:20.836: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:51:28.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-66" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":108,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:51:37.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-3823" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":125,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-network] HostPort
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 29 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:51:39.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "hostport-2220" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":19,"skipped":354,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:51:41.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-9660" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":126,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:51:42.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-5024" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":20,"skipped":376,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:51:43.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3205" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":474,"failed":0}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
    STEP: Destroying namespace "services-8296" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":26,"skipped":491,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:51:41.795: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-b159cdc3-f48d-4d5a-8443-471734d90630
    STEP: Creating a pod to test consume secrets
    Sep  2 11:51:41.840: INFO: Waiting up to 5m0s for pod "pod-secrets-d2697ba2-faed-457d-8631-53f6afaae784" in namespace "secrets-5632" to be "Succeeded or Failed"
    Sep  2 11:51:41.844: INFO: Pod "pod-secrets-d2697ba2-faed-457d-8631-53f6afaae784": Phase="Pending", Reason="", readiness=false. Elapsed: 4.004072ms
    Sep  2 11:51:43.849: INFO: Pod "pod-secrets-d2697ba2-faed-457d-8631-53f6afaae784": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008294285s
    Sep  2 11:51:45.852: INFO: Pod "pod-secrets-d2697ba2-faed-457d-8631-53f6afaae784": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012013412s
    STEP: Saw pod success
    Sep  2 11:51:45.852: INFO: Pod "pod-secrets-d2697ba2-faed-457d-8631-53f6afaae784" satisfied condition "Succeeded or Failed"
    Sep  2 11:51:45.855: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-cznwre pod pod-secrets-d2697ba2-faed-457d-8631-53f6afaae784 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  2 11:51:45.878: INFO: Waiting for pod pod-secrets-d2697ba2-faed-457d-8631-53f6afaae784 to disappear
    Sep  2 11:51:45.881: INFO: Pod pod-secrets-d2697ba2-faed-457d-8631-53f6afaae784 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:51:45.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-5632" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":150,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:51:43.845: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a volume subpath [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in volume subpath
    Sep  2 11:51:43.883: INFO: Waiting up to 5m0s for pod "var-expansion-fbb5ef6e-4d18-4c02-a4f9-db7583bcc041" in namespace "var-expansion-7736" to be "Succeeded or Failed"
    Sep  2 11:51:43.887: INFO: Pod "var-expansion-fbb5ef6e-4d18-4c02-a4f9-db7583bcc041": Phase="Pending", Reason="", readiness=false. Elapsed: 2.886583ms
    Sep  2 11:51:45.891: INFO: Pod "var-expansion-fbb5ef6e-4d18-4c02-a4f9-db7583bcc041": Phase="Running", Reason="", readiness=false. Elapsed: 2.007509593s
    Sep  2 11:51:47.897: INFO: Pod "var-expansion-fbb5ef6e-4d18-4c02-a4f9-db7583bcc041": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013054554s
    STEP: Saw pod success
    Sep  2 11:51:47.897: INFO: Pod "var-expansion-fbb5ef6e-4d18-4c02-a4f9-db7583bcc041" satisfied condition "Succeeded or Failed"
    Sep  2 11:51:47.900: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5 pod var-expansion-fbb5ef6e-4d18-4c02-a4f9-db7583bcc041 container dapi-container: <nil>
    STEP: delete the pod
    Sep  2 11:51:47.918: INFO: Waiting for pod var-expansion-fbb5ef6e-4d18-4c02-a4f9-db7583bcc041 to disappear
    Sep  2 11:51:47.921: INFO: Pod var-expansion-fbb5ef6e-4d18-4c02-a4f9-db7583bcc041 no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:51:47.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-7736" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":27,"skipped":501,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:51:47.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-5013" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":28,"skipped":507,"failed":0}
    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:51:48.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7710" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":29,"skipped":526,"failed":0}
    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:51:48.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-7861" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":30,"skipped":527,"failed":0}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
    STEP: Destroying namespace "services-7340" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":31,"skipped":535,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:51:45.892: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-d40b548f-d436-4fee-a7d0-53abdbb1612b
    STEP: Creating a pod to test consume configMaps
    Sep  2 11:51:45.929: INFO: Waiting up to 5m0s for pod "pod-configmaps-d2789014-152a-4391-b5df-98768f99616b" in namespace "configmap-8425" to be "Succeeded or Failed"
    Sep  2 11:51:45.933: INFO: Pod "pod-configmaps-d2789014-152a-4391-b5df-98768f99616b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.863527ms
    Sep  2 11:51:47.937: INFO: Pod "pod-configmaps-d2789014-152a-4391-b5df-98768f99616b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007556737s
    Sep  2 11:51:49.942: INFO: Pod "pod-configmaps-d2789014-152a-4391-b5df-98768f99616b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01183081s
    STEP: Saw pod success
    Sep  2 11:51:49.942: INFO: Pod "pod-configmaps-d2789014-152a-4391-b5df-98768f99616b" satisfied condition "Succeeded or Failed"
    Sep  2 11:51:49.945: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-cznwre pod pod-configmaps-d2789014-152a-4391-b5df-98768f99616b container agnhost-container: <nil>
    STEP: delete the pod
    Sep  2 11:51:49.961: INFO: Waiting for pod pod-configmaps-d2789014-152a-4391-b5df-98768f99616b to disappear
    Sep  2 11:51:49.963: INFO: Pod pod-configmaps-d2789014-152a-4391-b5df-98768f99616b no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:51:49.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-8425" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":151,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:51:50.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-4630" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":385,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}
    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's cpu request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  2 11:51:48.351: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4d625c6a-e1fc-4c8e-ba60-43dc78966d30" in namespace "downward-api-6688" to be "Succeeded or Failed"
    Sep  2 11:51:48.355: INFO: Pod "downwardapi-volume-4d625c6a-e1fc-4c8e-ba60-43dc78966d30": Phase="Pending", Reason="", readiness=false. Elapsed: 3.400205ms
    Sep  2 11:51:50.359: INFO: Pod "downwardapi-volume-4d625c6a-e1fc-4c8e-ba60-43dc78966d30": Phase="Running", Reason="", readiness=false. Elapsed: 2.007289301s
    Sep  2 11:51:52.363: INFO: Pod "downwardapi-volume-4d625c6a-e1fc-4c8e-ba60-43dc78966d30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011788609s
    STEP: Saw pod success
    Sep  2 11:51:52.363: INFO: Pod "downwardapi-volume-4d625c6a-e1fc-4c8e-ba60-43dc78966d30" satisfied condition "Succeeded or Failed"
    Sep  2 11:51:52.367: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod downwardapi-volume-4d625c6a-e1fc-4c8e-ba60-43dc78966d30 container client-container: <nil>
    STEP: delete the pod
    Sep  2 11:51:52.391: INFO: Waiting for pod downwardapi-volume-4d625c6a-e1fc-4c8e-ba60-43dc78966d30 to disappear
    Sep  2 11:51:52.394: INFO: Pod downwardapi-volume-4d625c6a-e1fc-4c8e-ba60-43dc78966d30 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:51:52.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-6688" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":563,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] version v1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 39 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:51:52.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "proxy-8815" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":22,"skipped":387,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:51:56.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-7583" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":23,"skipped":425,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}
    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:51:56.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-1451" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":24,"skipped":441,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    STEP: Destroying namespace "services-9227" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":25,"skipped":445,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:51:56.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-1409" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":157,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:51:56.411: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-ff04df6c-50c0-421c-8394-7b39f7eb1068
    STEP: Creating a pod to test consume configMaps
    Sep  2 11:51:56.452: INFO: Waiting up to 5m0s for pod "pod-configmaps-f1061cf6-dfd6-4a0b-be0b-3b7924adbfbc" in namespace "configmap-1082" to be "Succeeded or Failed"
    Sep  2 11:51:56.455: INFO: Pod "pod-configmaps-f1061cf6-dfd6-4a0b-be0b-3b7924adbfbc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.240987ms
    Sep  2 11:51:58.459: INFO: Pod "pod-configmaps-f1061cf6-dfd6-4a0b-be0b-3b7924adbfbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007156124s
    Sep  2 11:52:00.463: INFO: Pod "pod-configmaps-f1061cf6-dfd6-4a0b-be0b-3b7924adbfbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011414314s
    STEP: Saw pod success
    Sep  2 11:52:00.463: INFO: Pod "pod-configmaps-f1061cf6-dfd6-4a0b-be0b-3b7924adbfbc" satisfied condition "Succeeded or Failed"
    Sep  2 11:52:00.467: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod pod-configmaps-f1061cf6-dfd6-4a0b-be0b-3b7924adbfbc container agnhost-container: <nil>
    STEP: delete the pod
    Sep  2 11:52:00.482: INFO: Waiting for pod pod-configmaps-f1061cf6-dfd6-4a0b-be0b-3b7924adbfbc to disappear
    Sep  2 11:52:00.485: INFO: Pod pod-configmaps-f1061cf6-dfd6-4a0b-be0b-3b7924adbfbc no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:52:00.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-1082" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":483,"failed":1,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
    STEP: Deploying the webhook pod
    STEP: Wait for the deployment to be ready
    Sep  2 11:51:57.094: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
    STEP: Deploying the webhook service
    STEP: Verifying the service has paired with the endpoint
    Sep  2 11:52:00.113: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
    [It] should unconditionally reject operations on fail closed webhook [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
    STEP: create a namespace for the webhook
    STEP: create a configmap should be unconditionally rejected by the webhook
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:52:01.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "webhook-4692" for this suite.
    STEP: Destroying namespace "webhook-4692-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":16,"skipped":169,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:52:27.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-8763" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":33,"skipped":587,"failed":0}
    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] server version
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:52:27.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "server-version-2695" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":34,"skipped":589,"failed":0}
    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:53:01.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-881" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":199,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir wrapper volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:53:03.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-wrapper-5600" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":18,"skipped":291,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:53:07.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7700" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":309,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:53:07.972: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-runtime
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: create the container
    STEP: wait for the container to reach Failed
    STEP: get the container status
    STEP: the container should be terminated
    STEP: the termination message should be set
    Sep  2 11:53:11.031: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
    STEP: delete the container
    [AfterEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:53:11.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-2563" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":315,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:53:11.092: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-map-fa84febd-afa4-431f-800b-1e77b75917a6
    STEP: Creating a pod to test consume secrets
    Sep  2 11:53:11.149: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a111f8d0-5a44-4cc8-a49d-a9535b256506" in namespace "projected-7204" to be "Succeeded or Failed"
    Sep  2 11:53:11.154: INFO: Pod "pod-projected-secrets-a111f8d0-5a44-4cc8-a49d-a9535b256506": Phase="Pending", Reason="", readiness=false. Elapsed: 4.738739ms
    Sep  2 11:53:13.160: INFO: Pod "pod-projected-secrets-a111f8d0-5a44-4cc8-a49d-a9535b256506": Phase="Running", Reason="", readiness=false. Elapsed: 2.010912058s
    Sep  2 11:53:15.164: INFO: Pod "pod-projected-secrets-a111f8d0-5a44-4cc8-a49d-a9535b256506": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01540581s
    STEP: Saw pod success
    Sep  2 11:53:15.164: INFO: Pod "pod-projected-secrets-a111f8d0-5a44-4cc8-a49d-a9535b256506" satisfied condition "Succeeded or Failed"
    Sep  2 11:53:15.168: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod pod-projected-secrets-a111f8d0-5a44-4cc8-a49d-a9535b256506 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  2 11:53:15.188: INFO: Waiting for pod pod-projected-secrets-a111f8d0-5a44-4cc8-a49d-a9535b256506 to disappear
    Sep  2 11:53:15.192: INFO: Pod pod-projected-secrets-a111f8d0-5a44-4cc8-a49d-a9535b256506 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:53:15.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7204" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":332,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:53:15.339: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via the environment [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating secret secrets-1866/secret-test-6cf0ade7-9118-423a-a495-de4cf0ec4aa0
    STEP: Creating a pod to test consume secrets
    Sep  2 11:53:15.399: INFO: Waiting up to 5m0s for pod "pod-configmaps-31cbdae3-d6d5-4f4e-a573-f28415729cee" in namespace "secrets-1866" to be "Succeeded or Failed"
    Sep  2 11:53:15.403: INFO: Pod "pod-configmaps-31cbdae3-d6d5-4f4e-a573-f28415729cee": Phase="Pending", Reason="", readiness=false. Elapsed: 3.899802ms
    Sep  2 11:53:17.407: INFO: Pod "pod-configmaps-31cbdae3-d6d5-4f4e-a573-f28415729cee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008394334s
    Sep  2 11:53:19.411: INFO: Pod "pod-configmaps-31cbdae3-d6d5-4f4e-a573-f28415729cee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012283145s
    STEP: Saw pod success
    Sep  2 11:53:19.411: INFO: Pod "pod-configmaps-31cbdae3-d6d5-4f4e-a573-f28415729cee" satisfied condition "Succeeded or Failed"
    Sep  2 11:53:19.414: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod pod-configmaps-31cbdae3-d6d5-4f4e-a573-f28415729cee container env-test: <nil>
    STEP: delete the pod
    Sep  2 11:53:19.428: INFO: Waiting for pod pod-configmaps-31cbdae3-d6d5-4f4e-a573-f28415729cee to disappear
    Sep  2 11:53:19.431: INFO: Pod pod-configmaps-31cbdae3-d6d5-4f4e-a573-f28415729cee no longer exists
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:53:19.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-1866" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":403,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:53:22.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-8743" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":23,"skipped":408,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:53:28.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-9739" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":24,"skipped":421,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:53:28.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-2412" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":25,"skipped":429,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:53:28.798: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name projected-secret-test-86209759-f629-4a0f-a58f-d154dae1b65d
    STEP: Creating a pod to test consume secrets
    Sep  2 11:53:28.845: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d6aa9799-e9ca-4b67-a0c2-7fd24c46c3c8" in namespace "projected-5895" to be "Succeeded or Failed"
    Sep  2 11:53:28.848: INFO: Pod "pod-projected-secrets-d6aa9799-e9ca-4b67-a0c2-7fd24c46c3c8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.050335ms
    Sep  2 11:53:30.853: INFO: Pod "pod-projected-secrets-d6aa9799-e9ca-4b67-a0c2-7fd24c46c3c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008002482s
    Sep  2 11:53:32.856: INFO: Pod "pod-projected-secrets-d6aa9799-e9ca-4b67-a0c2-7fd24c46c3c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011239157s
    STEP: Saw pod success
    Sep  2 11:53:32.856: INFO: Pod "pod-projected-secrets-d6aa9799-e9ca-4b67-a0c2-7fd24c46c3c8" satisfied condition "Succeeded or Failed"
    Sep  2 11:53:32.859: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod pod-projected-secrets-d6aa9799-e9ca-4b67-a0c2-7fd24c46c3c8 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  2 11:53:32.875: INFO: Waiting for pod pod-projected-secrets-d6aa9799-e9ca-4b67-a0c2-7fd24c46c3c8 to disappear
    Sep  2 11:53:32.877: INFO: Pod pod-projected-secrets-d6aa9799-e9ca-4b67-a0c2-7fd24c46c3c8 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:53:32.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5895" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":441,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
    [It] should serve multiport endpoints from pods  [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating service multi-endpoint-test in namespace services-9874
    STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9874 to expose endpoints map[]
    Sep  2 11:53:32.965: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found
    Sep  2 11:53:33.978: INFO: successfully validated that service multi-endpoint-test in namespace services-9874 exposes endpoints map[]
    STEP: Creating pod pod1 in namespace services-9874
    Sep  2 11:53:33.998: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
    Sep  2 11:53:36.002: INFO: The status of Pod pod1 is Running (Ready = true)
    STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9874 to expose endpoints map[pod1:[100]]
    Sep  2 11:53:36.015: INFO: successfully validated that service multi-endpoint-test in namespace services-9874 exposes endpoints map[pod1:[100]]
... skipping 28 lines ...
    STEP: Destroying namespace "services-9874" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":27,"skipped":448,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:53:41.891: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  2 11:53:41.950: INFO: Waiting up to 5m0s for pod "downwardapi-volume-00f2fafa-2781-45c3-aa2c-338bdbe7a3b2" in namespace "downward-api-2409" to be "Succeeded or Failed"
    Sep  2 11:53:41.954: INFO: Pod "downwardapi-volume-00f2fafa-2781-45c3-aa2c-338bdbe7a3b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170744ms
    Sep  2 11:53:43.959: INFO: Pod "downwardapi-volume-00f2fafa-2781-45c3-aa2c-338bdbe7a3b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008648499s
    Sep  2 11:53:45.965: INFO: Pod "downwardapi-volume-00f2fafa-2781-45c3-aa2c-338bdbe7a3b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014398584s
    STEP: Saw pod success
    Sep  2 11:53:45.965: INFO: Pod "downwardapi-volume-00f2fafa-2781-45c3-aa2c-338bdbe7a3b2" satisfied condition "Succeeded or Failed"
    Sep  2 11:53:45.968: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod downwardapi-volume-00f2fafa-2781-45c3-aa2c-338bdbe7a3b2 container client-container: <nil>
    STEP: delete the pod
    Sep  2 11:53:45.983: INFO: Waiting for pod downwardapi-volume-00f2fafa-2781-45c3-aa2c-338bdbe7a3b2 to disappear
    Sep  2 11:53:45.986: INFO: Pod downwardapi-volume-00f2fafa-2781-45c3-aa2c-338bdbe7a3b2 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:53:45.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-2409" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":448,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:53:48.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-6176" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":472,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:53:49.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-3172" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":30,"skipped":488,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
    
    STEP: creating a pod to probe DNS
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  2 11:52:57.101: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5670.svc.cluster.local from pod dns-5670/dns-test-a35b53ae-d638-4e2b-87e2-1a6adde20ffe: the server is currently unable to handle the request (get pods dns-test-a35b53ae-d638-4e2b-87e2-1a6adde20ffe)
    Sep  2 11:54:24.052: FAIL: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5670.svc.cluster.local from pod dns-5670/dns-test-a35b53ae-d638-4e2b-87e2-1a6adde20ffe: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-5670/pods/dns-test-a35b53ae-d638-4e2b-87e2-1a6adde20ffe/proxy/results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5670.svc.cluster.local": context deadline exceeded
    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc00019e010, 0x7f0111b44a68, 0x18, 0xc0042d9fc8)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc00019e010, 0xc0015e5a20, 0x2a14500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f
... skipping 17 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b
    testing.tRunner(0xc00003c480, 0x729a2d8)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0902 11:54:24.053495      17 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep  2 11:54:24.052: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5670.svc.cluster.local from pod dns-5670/dns-test-a35b53ae-d638-4e2b-87e2-1a6adde20ffe: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-5670/pods/dns-test-a35b53ae-d638-4e2b-87e2-1a6adde20ffe/proxy/results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5670.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:217, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc00019e010, 0x7f0111b44a68, 0x18, 0xc0042d9fc8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc00019e010, 0xc0015e5a20, 0x2a14500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x78de4a8, 0xc00019e010, 0xc0042d9f01, 0xc0042d9fc8, 0xc0015e5a20, 0x6826620, 0xc0015e5a20)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:577 +0xe5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x78de4a8, 0xc00019e010, 0x12a05f200, 0x8bb2c97000, 0xc0015e5a20, 0x6d6e4e0, 0x2521201)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc003539960, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc003e2b400, 0xc, 0x10, 0x702fe9b, 0x7, 0xc003e5c800, 0x7971668, 0xc003ce2160, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x13c\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000e79b80, 0xc003e5c800, 0xc003e2b400, 0xc, 0x10)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.8()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:322 +0xb0f\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc00003c480)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc00003c480)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b\ntesting.tRunner(0xc00003c480, 0x729a2d8)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.
    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6bbe4c0, 0xc003f0a040)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6bbe4c0, 0xc003f0a040)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc000c0cd00, 0x187, 0x88abe86, 0x7d, 0xd9, 0xc000289c00, 0xa8a)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x62ef260, 0x77956f0)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc000c0cd00, 0x187, 0xc003b315f8, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc000c0cd00, 0x187, 0xc003b316e0, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x70d3e4f, 0x24, 0xc003b31940, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc00019e010, 0x7f0111b44a68, 0x18, 0xc0042d9fc8)
... skipping 136 lines ...
    STEP: Destroying namespace "services-6422" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":31,"skipped":496,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:55:09.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-2144" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":32,"skipped":502,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:55:12.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-3979" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":33,"skipped":504,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:55:18.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-2168" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":34,"skipped":515,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:55:18.115: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on node default medium
    Sep  2 11:55:18.151: INFO: Waiting up to 5m0s for pod "pod-556a5fbd-2533-43a1-aa0e-beac1ba7b00f" in namespace "emptydir-6909" to be "Succeeded or Failed"
    Sep  2 11:55:18.154: INFO: Pod "pod-556a5fbd-2533-43a1-aa0e-beac1ba7b00f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.115087ms
    Sep  2 11:55:20.158: INFO: Pod "pod-556a5fbd-2533-43a1-aa0e-beac1ba7b00f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007370983s
    Sep  2 11:55:22.163: INFO: Pod "pod-556a5fbd-2533-43a1-aa0e-beac1ba7b00f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012269616s
    STEP: Saw pod success
    Sep  2 11:55:22.163: INFO: Pod "pod-556a5fbd-2533-43a1-aa0e-beac1ba7b00f" satisfied condition "Succeeded or Failed"
    Sep  2 11:55:22.168: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5 pod pod-556a5fbd-2533-43a1-aa0e-beac1ba7b00f container test-container: <nil>
    STEP: delete the pod
    Sep  2 11:55:22.197: INFO: Waiting for pod pod-556a5fbd-2533-43a1-aa0e-beac1ba7b00f to disappear
    Sep  2 11:55:22.201: INFO: Pod pod-556a5fbd-2533-43a1-aa0e-beac1ba7b00f no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:55:22.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-6909" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":518,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:55:22.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-8849" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":36,"skipped":521,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:55:28.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-3987" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":37,"skipped":536,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:55:28.640: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename deployment
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 23 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:55:30.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-4971" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":38,"skipped":536,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:56:07.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-2253" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":39,"skipped":640,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:56:07.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-9671" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":40,"skipped":662,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:56:19.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-1598" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":41,"skipped":667,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:56:19.073: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via the environment [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap configmap-2552/configmap-test-9d1a9f12-ea75-4a86-856b-d858d57b4464
    STEP: Creating a pod to test consume configMaps
    Sep  2 11:56:19.121: INFO: Waiting up to 5m0s for pod "pod-configmaps-6075a507-58b8-4cc4-8845-079fb5ad9592" in namespace "configmap-2552" to be "Succeeded or Failed"
    Sep  2 11:56:19.124: INFO: Pod "pod-configmaps-6075a507-58b8-4cc4-8845-079fb5ad9592": Phase="Pending", Reason="", readiness=false. Elapsed: 3.112049ms
    Sep  2 11:56:21.129: INFO: Pod "pod-configmaps-6075a507-58b8-4cc4-8845-079fb5ad9592": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007878246s
    Sep  2 11:56:23.134: INFO: Pod "pod-configmaps-6075a507-58b8-4cc4-8845-079fb5ad9592": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01307349s
    STEP: Saw pod success
    Sep  2 11:56:23.134: INFO: Pod "pod-configmaps-6075a507-58b8-4cc4-8845-079fb5ad9592" satisfied condition "Succeeded or Failed"
    Sep  2 11:56:23.137: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod pod-configmaps-6075a507-58b8-4cc4-8845-079fb5ad9592 container env-test: <nil>
    STEP: delete the pod
    Sep  2 11:56:23.164: INFO: Waiting for pod pod-configmaps-6075a507-58b8-4cc4-8845-079fb5ad9592 to disappear
    Sep  2 11:56:23.166: INFO: Pod pod-configmaps-6075a507-58b8-4cc4-8845-079fb5ad9592 no longer exists
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:56:23.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-2552" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":701,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 40 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:57:33.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-7843" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":43,"skipped":705,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:57:33.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-6359" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":44,"skipped":708,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:57:37.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-6819" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":785,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 282 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  32s   default-scheduler  Successfully assigned pod-network-test-1215/netserver-3 to k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu
      Normal  Pulled     31s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine
      Normal  Created    31s   kubelet            Created container webserver
      Normal  Started    31s   kubelet            Started container webserver
    
    Sep  2 11:52:32.284: INFO: encountered error during dial (did not find expected responses... 
    Tries 1
    Command curl -g -q -s 'http://192.168.0.48:9080/dial?request=hostname&protocol=http&host=192.168.2.21&port=8083&tries=1'
    retrieved map[]
    expected map[netserver-2:{}])
    Sep  2 11:52:32.284: INFO: ...failed...will try again in next pass
    Sep  2 11:52:32.284: INFO: Breadth first check of 192.168.6.36 on host 172.18.0.6...
    Sep  2 11:52:32.288: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.48:9080/dial?request=hostname&protocol=http&host=192.168.6.36&port=8083&tries=1'] Namespace:pod-network-test-1215 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  2 11:52:32.288: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  2 11:52:32.387: INFO: Waiting for responses: map[]
    Sep  2 11:52:32.387: INFO: reached 192.168.6.36 after 0/1 tries
    Sep  2 11:52:32.387: INFO: Going to retry 1 out of 4 pods....
... skipping 382 lines ...
      ----    ------     ----   ----               -------
      Normal  Scheduled  5m59s  default-scheduler  Successfully assigned pod-network-test-1215/netserver-3 to k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu
      Normal  Pulled     5m58s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine
      Normal  Created    5m58s  kubelet            Created container webserver
      Normal  Started    5m58s  kubelet            Started container webserver
    
    Sep  2 11:57:59.076: INFO: encountered error during dial (did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.0.48:9080/dial?request=hostname&protocol=http&host=192.168.2.21&port=8083&tries=1'
    retrieved map[]
    expected map[netserver-2:{}])
    Sep  2 11:57:59.076: INFO: ... Done probing pod [[[ 192.168.2.21 ]]]
    Sep  2 11:57:59.076: INFO: succeeded at polling 3 out of 4 connections
    Sep  2 11:57:59.076: INFO: pod polling failure summary:
    Sep  2 11:57:59.076: INFO: Collected error: did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.0.48:9080/dial?request=hostname&protocol=http&host=192.168.2.21&port=8083&tries=1'
    retrieved map[]
    expected map[netserver-2:{}]
    Sep  2 11:57:59.077: FAIL: failed,  1 out of 4 connections failed
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 +0x69
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0009f8a80)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 14 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
      Granular Checks: Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
        should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep  2 11:57:59.077: failed,  1 out of 4 connections failed
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:57:59.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-7583" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":808,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 101 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:59:11.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-5481" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":47,"skipped":844,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  2 11:59:11.743: INFO: Waiting up to 5m0s for pod "downwardapi-volume-45bf6720-b128-4172-9bfe-66a7c0d98d8e" in namespace "projected-9535" to be "Succeeded or Failed"
    Sep  2 11:59:11.746: INFO: Pod "downwardapi-volume-45bf6720-b128-4172-9bfe-66a7c0d98d8e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.033921ms
    Sep  2 11:59:13.750: INFO: Pod "downwardapi-volume-45bf6720-b128-4172-9bfe-66a7c0d98d8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007555609s
    Sep  2 11:59:15.755: INFO: Pod "downwardapi-volume-45bf6720-b128-4172-9bfe-66a7c0d98d8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011702345s
    STEP: Saw pod success
    Sep  2 11:59:15.755: INFO: Pod "downwardapi-volume-45bf6720-b128-4172-9bfe-66a7c0d98d8e" satisfied condition "Succeeded or Failed"
    Sep  2 11:59:15.758: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5 pod downwardapi-volume-45bf6720-b128-4172-9bfe-66a7c0d98d8e container client-container: <nil>
    STEP: delete the pod
    Sep  2 11:59:15.778: INFO: Waiting for pod downwardapi-volume-45bf6720-b128-4172-9bfe-66a7c0d98d8e to disappear
    Sep  2 11:59:15.781: INFO: Pod downwardapi-volume-45bf6720-b128-4172-9bfe-66a7c0d98d8e no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:59:15.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9535" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":862,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188
    [It] should contain environment variables for services [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  2 11:59:15.839: INFO: The status of Pod server-envvars-d2cad328-8b70-4bf6-8479-09b4f89d1a23 is Pending, waiting for it to be Running (with Ready = true)
    Sep  2 11:59:17.844: INFO: The status of Pod server-envvars-d2cad328-8b70-4bf6-8479-09b4f89d1a23 is Running (Ready = true)
    Sep  2 11:59:17.864: INFO: Waiting up to 5m0s for pod "client-envvars-919ac890-8770-47f2-af1d-ad1ad8c69ccd" in namespace "pods-2205" to be "Succeeded or Failed"
    Sep  2 11:59:17.869: INFO: Pod "client-envvars-919ac890-8770-47f2-af1d-ad1ad8c69ccd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.36607ms
    Sep  2 11:59:19.875: INFO: Pod "client-envvars-919ac890-8770-47f2-af1d-ad1ad8c69ccd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011080026s
    Sep  2 11:59:21.880: INFO: Pod "client-envvars-919ac890-8770-47f2-af1d-ad1ad8c69ccd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016676089s
    STEP: Saw pod success
    Sep  2 11:59:21.880: INFO: Pod "client-envvars-919ac890-8770-47f2-af1d-ad1ad8c69ccd" satisfied condition "Succeeded or Failed"
    Sep  2 11:59:21.883: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod client-envvars-919ac890-8770-47f2-af1d-ad1ad8c69ccd container env3cont: <nil>
    STEP: delete the pod
    Sep  2 11:59:21.910: INFO: Waiting for pod client-envvars-919ac890-8770-47f2-af1d-ad1ad8c69ccd to disappear
    Sep  2 11:59:21.913: INFO: Pod client-envvars-919ac890-8770-47f2-af1d-ad1ad8c69ccd no longer exists
    [AfterEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:59:21.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-2205" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":865,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:59:22.040: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on node default medium
    Sep  2 11:59:22.076: INFO: Waiting up to 5m0s for pod "pod-779a517d-eb65-485e-ad65-bb45e778e577" in namespace "emptydir-1479" to be "Succeeded or Failed"
    Sep  2 11:59:22.078: INFO: Pod "pod-779a517d-eb65-485e-ad65-bb45e778e577": Phase="Pending", Reason="", readiness=false. Elapsed: 2.668046ms
    Sep  2 11:59:24.084: INFO: Pod "pod-779a517d-eb65-485e-ad65-bb45e778e577": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007942623s
    Sep  2 11:59:26.089: INFO: Pod "pod-779a517d-eb65-485e-ad65-bb45e778e577": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013155241s
    STEP: Saw pod success
    Sep  2 11:59:26.089: INFO: Pod "pod-779a517d-eb65-485e-ad65-bb45e778e577" satisfied condition "Succeeded or Failed"
    Sep  2 11:59:26.092: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod pod-779a517d-eb65-485e-ad65-bb45e778e577 container test-container: <nil>
    STEP: delete the pod
    Sep  2 11:59:26.108: INFO: Waiting for pod pod-779a517d-eb65-485e-ad65-bb45e778e577 to disappear
    Sep  2 11:59:26.110: INFO: Pod pod-779a517d-eb65-485e-ad65-bb45e778e577 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:59:26.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-1479" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":948,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:59:26.132: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail to create secret due to empty secret key [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name secret-emptykey-test-bbbe985d-9586-4bca-89d5-41f733561312
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:59:26.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-7171" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":51,"skipped":956,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":18,"skipped":274,"failed":1,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:54:24.103: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 6 lines ...
    
    STEP: creating a pod to probe DNS
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  2 11:58:02.257: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1675.svc.cluster.local from pod dns-1675/dns-test-cd2dda7e-f6c4-4a6e-b69c-14b7e0722918: the server is currently unable to handle the request (get pods dns-test-cd2dda7e-f6c4-4a6e-b69c-14b7e0722918)
    Sep  2 11:59:28.180: FAIL: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1675.svc.cluster.local from pod dns-1675/dns-test-cd2dda7e-f6c4-4a6e-b69c-14b7e0722918: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-1675/pods/dns-test-cd2dda7e-f6c4-4a6e-b69c-14b7e0722918/proxy/results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1675.svc.cluster.local": context deadline exceeded
    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc00019e010, 0x7f0111b445b8, 0x18, 0xc00302ae58)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc00019e010, 0xc002e86a60, 0x2a14500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f
... skipping 17 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b
    testing.tRunner(0xc00003c480, 0x729a2d8)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0902 11:59:28.181278      17 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep  2 11:59:28.180: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1675.svc.cluster.local from pod dns-1675/dns-test-cd2dda7e-f6c4-4a6e-b69c-14b7e0722918: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-1675/pods/dns-test-cd2dda7e-f6c4-4a6e-b69c-14b7e0722918/proxy/results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1675.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:217, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc00019e010, 0x7f0111b445b8, 0x18, 0xc00302ae58)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc00019e010, 0xc002e86a60, 0x2a14500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x78de4a8, 0xc00019e010, 0xc00302ae01, 0xc00302ae58, 0xc002e86a60, 0x6826620, 0xc002e86a60)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:577 +0xe5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x78de4a8, 0xc00019e010, 0x12a05f200, 0x8bb2c97000, 0xc002e86a60, 0x6d6e4e0, 0x2521201)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc0017facb0, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc002610100, 0xc, 0x10, 0x702fe9b, 0x7, 0xc000501000, 0x7971668, 0xc004da69a0, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x13c\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000e79b80, 0xc000501000, 0xc002610100, 0xc, 0x10)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.8()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:322 +0xb0f\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc00003c480)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc00003c480)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b\ntesting.tRunner(0xc00003c480, 0x729a2d8)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.
    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6bbe4c0, 0xc003378100)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6bbe4c0, 0xc003378100)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc000c0cd00, 0x187, 0x88abe86, 0x7d, 0xd9, 0xc000289c00, 0xa8a)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x62ef260, 0x77956f0)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc000c0cd00, 0x187, 0xc003b315f8, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc000c0cd00, 0x187, 0xc003b316e0, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x70d3e4f, 0x24, 0xc003b31940, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc00019e010, 0x7f0111b445b8, 0x18, 0xc00302ae58)
... skipping 81 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:59:37.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-1299" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":52,"skipped":988,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 34 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:59:47.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-6884" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":53,"skipped":1001,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    STEP: Destroying namespace "webhook-2490-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":54,"skipped":1017,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 11:59:57.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-2142" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":55,"skipped":1070,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:00:02.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-1360" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":56,"skipped":1083,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:00:06.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-8367" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":57,"skipped":1085,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:00:06.343: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a container's args [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in container's args
    Sep  2 12:00:06.381: INFO: Waiting up to 5m0s for pod "var-expansion-d53a690c-b96c-41bd-b1e1-21a24eb748a5" in namespace "var-expansion-5936" to be "Succeeded or Failed"
    Sep  2 12:00:06.385: INFO: Pod "var-expansion-d53a690c-b96c-41bd-b1e1-21a24eb748a5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.801766ms
    Sep  2 12:00:08.389: INFO: Pod "var-expansion-d53a690c-b96c-41bd-b1e1-21a24eb748a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008319654s
    Sep  2 12:00:10.394: INFO: Pod "var-expansion-d53a690c-b96c-41bd-b1e1-21a24eb748a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013270561s
    STEP: Saw pod success
    Sep  2 12:00:10.394: INFO: Pod "var-expansion-d53a690c-b96c-41bd-b1e1-21a24eb748a5" satisfied condition "Succeeded or Failed"
    Sep  2 12:00:10.397: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-cznwre pod var-expansion-d53a690c-b96c-41bd-b1e1-21a24eb748a5 container dapi-container: <nil>
    STEP: delete the pod
    Sep  2 12:00:10.418: INFO: Waiting for pod var-expansion-d53a690c-b96c-41bd-b1e1-21a24eb748a5 to disappear
    Sep  2 12:00:10.421: INFO: Pod var-expansion-d53a690c-b96c-41bd-b1e1-21a24eb748a5 no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:00:10.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-5936" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":58,"skipped":1098,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:00:18.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-1776" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":59,"skipped":1102,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:00:18.595: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on node default medium
    Sep  2 12:00:18.634: INFO: Waiting up to 5m0s for pod "pod-543f7f1f-d18e-45c3-9f55-f6b7c436d880" in namespace "emptydir-435" to be "Succeeded or Failed"
    Sep  2 12:00:18.637: INFO: Pod "pod-543f7f1f-d18e-45c3-9f55-f6b7c436d880": Phase="Pending", Reason="", readiness=false. Elapsed: 3.141027ms
    Sep  2 12:00:20.643: INFO: Pod "pod-543f7f1f-d18e-45c3-9f55-f6b7c436d880": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008598465s
    Sep  2 12:00:22.648: INFO: Pod "pod-543f7f1f-d18e-45c3-9f55-f6b7c436d880": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013711976s
    STEP: Saw pod success
    Sep  2 12:00:22.648: INFO: Pod "pod-543f7f1f-d18e-45c3-9f55-f6b7c436d880" satisfied condition "Succeeded or Failed"
    Sep  2 12:00:22.651: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod pod-543f7f1f-d18e-45c3-9f55-f6b7c436d880 container test-container: <nil>
    STEP: delete the pod
    Sep  2 12:00:22.666: INFO: Waiting for pod pod-543f7f1f-d18e-45c3-9f55-f6b7c436d880 to disappear
    Sep  2 12:00:22.670: INFO: Pod pod-543f7f1f-d18e-45c3-9f55-f6b7c436d880 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:00:22.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-435" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":60,"skipped":1139,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    S
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:00:22.683: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test env composition
    Sep  2 12:00:22.736: INFO: Waiting up to 5m0s for pod "var-expansion-72f71e4b-bda6-4f99-8a17-f6e49ea54971" in namespace "var-expansion-417" to be "Succeeded or Failed"
    Sep  2 12:00:22.740: INFO: Pod "var-expansion-72f71e4b-bda6-4f99-8a17-f6e49ea54971": Phase="Pending", Reason="", readiness=false. Elapsed: 3.804614ms
    Sep  2 12:00:24.745: INFO: Pod "var-expansion-72f71e4b-bda6-4f99-8a17-f6e49ea54971": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008739584s
    Sep  2 12:00:26.749: INFO: Pod "var-expansion-72f71e4b-bda6-4f99-8a17-f6e49ea54971": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013211095s
    STEP: Saw pod success
    Sep  2 12:00:26.749: INFO: Pod "var-expansion-72f71e4b-bda6-4f99-8a17-f6e49ea54971" satisfied condition "Succeeded or Failed"
    Sep  2 12:00:26.753: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod var-expansion-72f71e4b-bda6-4f99-8a17-f6e49ea54971 container dapi-container: <nil>
    STEP: delete the pod
    Sep  2 12:00:26.768: INFO: Waiting for pod var-expansion-72f71e4b-bda6-4f99-8a17-f6e49ea54971 to disappear
    Sep  2 12:00:26.772: INFO: Pod var-expansion-72f71e4b-bda6-4f99-8a17-f6e49ea54971 no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:00:26.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-417" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":61,"skipped":1140,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:00:27.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-7783" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":62,"skipped":1163,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:00:38.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-3654" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":63,"skipped":1231,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
    STEP: Destroying namespace "services-9173" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":64,"skipped":1300,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:00:45.595: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-aa33e83d-8c4b-47ff-9109-59dc2c73e0fa
    STEP: Creating a pod to test consume secrets
    Sep  2 12:00:45.640: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d200e7da-bd4c-4a51-8573-f32e72e9c4c0" in namespace "projected-39" to be "Succeeded or Failed"
    Sep  2 12:00:45.643: INFO: Pod "pod-projected-secrets-d200e7da-bd4c-4a51-8573-f32e72e9c4c0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.222314ms
    Sep  2 12:00:47.649: INFO: Pod "pod-projected-secrets-d200e7da-bd4c-4a51-8573-f32e72e9c4c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008818129s
    Sep  2 12:00:49.655: INFO: Pod "pod-projected-secrets-d200e7da-bd4c-4a51-8573-f32e72e9c4c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014320434s
    STEP: Saw pod success
    Sep  2 12:00:49.655: INFO: Pod "pod-projected-secrets-d200e7da-bd4c-4a51-8573-f32e72e9c4c0" satisfied condition "Succeeded or Failed"
    Sep  2 12:00:49.658: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod pod-projected-secrets-d200e7da-bd4c-4a51-8573-f32e72e9c4c0 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  2 12:00:49.679: INFO: Waiting for pod pod-projected-secrets-d200e7da-bd4c-4a51-8573-f32e72e9c4c0 to disappear
    Sep  2 12:00:49.682: INFO: Pod pod-projected-secrets-d200e7da-bd4c-4a51-8573-f32e72e9c4c0 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:00:49.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-39" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":65,"skipped":1301,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:00:49.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-8833" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":66,"skipped":1317,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  2 12:00:49.903: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b15af9c0-ae6f-4232-b903-fdc179df60ba" in namespace "downward-api-720" to be "Succeeded or Failed"
    Sep  2 12:00:49.913: INFO: Pod "downwardapi-volume-b15af9c0-ae6f-4232-b903-fdc179df60ba": Phase="Pending", Reason="", readiness=false. Elapsed: 10.217998ms
    Sep  2 12:00:51.919: INFO: Pod "downwardapi-volume-b15af9c0-ae6f-4232-b903-fdc179df60ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016012377s
    Sep  2 12:00:53.924: INFO: Pod "downwardapi-volume-b15af9c0-ae6f-4232-b903-fdc179df60ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021392317s
    STEP: Saw pod success
    Sep  2 12:00:53.924: INFO: Pod "downwardapi-volume-b15af9c0-ae6f-4232-b903-fdc179df60ba" satisfied condition "Succeeded or Failed"
    Sep  2 12:00:53.928: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5 pod downwardapi-volume-b15af9c0-ae6f-4232-b903-fdc179df60ba container client-container: <nil>
    STEP: delete the pod
    Sep  2 12:00:53.951: INFO: Waiting for pod downwardapi-volume-b15af9c0-ae6f-4232-b903-fdc179df60ba to disappear
    Sep  2 12:00:53.954: INFO: Pod downwardapi-volume-b15af9c0-ae6f-4232-b903-fdc179df60ba no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:00:53.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-720" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":67,"skipped":1346,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-scheduling] LimitRange
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 32 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:01:01.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "limitrange-427" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":68,"skipped":1354,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:01:23.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-1717" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":69,"skipped":1370,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:01:24.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-7202" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":70,"skipped":1409,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:01:24.398: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-811d2919-a29c-4b90-ba1c-102bca7c315c
    STEP: Creating a pod to test consume configMaps
    Sep  2 12:01:24.439: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-30bcc613-6e4f-450a-b101-0a5e3422e8e6" in namespace "projected-5700" to be "Succeeded or Failed"
    Sep  2 12:01:24.441: INFO: Pod "pod-projected-configmaps-30bcc613-6e4f-450a-b101-0a5e3422e8e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.546405ms
    Sep  2 12:01:26.445: INFO: Pod "pod-projected-configmaps-30bcc613-6e4f-450a-b101-0a5e3422e8e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00667737s
    Sep  2 12:01:28.450: INFO: Pod "pod-projected-configmaps-30bcc613-6e4f-450a-b101-0a5e3422e8e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010780813s
    STEP: Saw pod success
    Sep  2 12:01:28.450: INFO: Pod "pod-projected-configmaps-30bcc613-6e4f-450a-b101-0a5e3422e8e6" satisfied condition "Succeeded or Failed"
    Sep  2 12:01:28.453: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod pod-projected-configmaps-30bcc613-6e4f-450a-b101-0a5e3422e8e6 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  2 12:01:28.468: INFO: Waiting for pod pod-projected-configmaps-30bcc613-6e4f-450a-b101-0a5e3422e8e6 to disappear
    Sep  2 12:01:28.470: INFO: Pod pod-projected-configmaps-30bcc613-6e4f-450a-b101-0a5e3422e8e6 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:01:28.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5700" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":71,"skipped":1471,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
    STEP: Destroying namespace "webhook-3371-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":72,"skipped":1481,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
    STEP: Destroying namespace "webhook-932-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":73,"skipped":1484,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:03:01.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-7118" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":74,"skipped":1501,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:03:08.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3292" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":75,"skipped":1513,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:03:08.067: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep  2 12:03:08.152: INFO: Waiting up to 5m0s for pod "downward-api-7660a42f-f1f9-4a30-85ab-0ceb5d89feec" in namespace "downward-api-2896" to be "Succeeded or Failed"
    Sep  2 12:03:08.157: INFO: Pod "downward-api-7660a42f-f1f9-4a30-85ab-0ceb5d89feec": Phase="Pending", Reason="", readiness=false. Elapsed: 5.580814ms
    Sep  2 12:03:10.164: INFO: Pod "downward-api-7660a42f-f1f9-4a30-85ab-0ceb5d89feec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012524961s
    Sep  2 12:03:12.170: INFO: Pod "downward-api-7660a42f-f1f9-4a30-85ab-0ceb5d89feec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01813475s
    STEP: Saw pod success
    Sep  2 12:03:12.170: INFO: Pod "downward-api-7660a42f-f1f9-4a30-85ab-0ceb5d89feec" satisfied condition "Succeeded or Failed"
    Sep  2 12:03:12.176: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5 pod downward-api-7660a42f-f1f9-4a30-85ab-0ceb5d89feec container dapi-container: <nil>
    STEP: delete the pod
    Sep  2 12:03:12.226: INFO: Waiting for pod downward-api-7660a42f-f1f9-4a30-85ab-0ceb5d89feec to disappear
    Sep  2 12:03:12.232: INFO: Pod downward-api-7660a42f-f1f9-4a30-85ab-0ceb5d89feec no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:03:12.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-2896" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":76,"skipped":1518,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:03:12.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-2457" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":77,"skipped":1528,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:03:14.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-3662" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":78,"skipped":1571,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:03:17.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-6630" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":79,"skipped":1576,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's memory limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  2 12:03:17.180: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dbf33b08-6160-43bd-9a8b-4513ece81d0a" in namespace "projected-251" to be "Succeeded or Failed"
    Sep  2 12:03:17.185: INFO: Pod "downwardapi-volume-dbf33b08-6160-43bd-9a8b-4513ece81d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.940609ms
    Sep  2 12:03:19.193: INFO: Pod "downwardapi-volume-dbf33b08-6160-43bd-9a8b-4513ece81d0a": Phase="Running", Reason="", readiness=false. Elapsed: 2.012437464s
    Sep  2 12:03:21.200: INFO: Pod "downwardapi-volume-dbf33b08-6160-43bd-9a8b-4513ece81d0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019677988s
    STEP: Saw pod success
    Sep  2 12:03:21.200: INFO: Pod "downwardapi-volume-dbf33b08-6160-43bd-9a8b-4513ece81d0a" satisfied condition "Succeeded or Failed"
    Sep  2 12:03:21.205: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod downwardapi-volume-dbf33b08-6160-43bd-9a8b-4513ece81d0a container client-container: <nil>
    STEP: delete the pod
    Sep  2 12:03:21.230: INFO: Waiting for pod downwardapi-volume-dbf33b08-6160-43bd-9a8b-4513ece81d0a to disappear
    Sep  2 12:03:21.236: INFO: Pod downwardapi-volume-dbf33b08-6160-43bd-9a8b-4513ece81d0a no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:03:21.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-251" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":80,"skipped":1615,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":510,"failed":2,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:57:59.090: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pod-network-test
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 274 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  22s   default-scheduler  Successfully assigned pod-network-test-4379/netserver-3 to k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu
      Normal  Pulled     22s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine
      Normal  Created    22s   kubelet            Created container webserver
      Normal  Started    22s   kubelet            Started container webserver
    
    Sep  2 11:58:21.028: INFO: encountered error during dial (did not find expected responses... 
    Tries 1
    Command curl -g -q -s 'http://192.168.1.41:9080/dial?request=hostname&protocol=http&host=192.168.2.27&port=8083&tries=1'
    retrieved map[]
    expected map[netserver-2:{}])
    Sep  2 11:58:21.028: INFO: ...failed...will try again in next pass
    Sep  2 11:58:21.028: INFO: Breadth first check of 192.168.6.52 on host 172.18.0.6...
    Sep  2 11:58:21.031: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.1.41:9080/dial?request=hostname&protocol=http&host=192.168.6.52&port=8083&tries=1'] Namespace:pod-network-test-4379 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  2 11:58:21.031: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  2 11:58:21.124: INFO: Waiting for responses: map[]
    Sep  2 11:58:21.124: INFO: reached 192.168.6.52 after 0/1 tries
    Sep  2 11:58:21.124: INFO: Going to retry 1 out of 4 pods....
... skipping 382 lines ...
      ----    ------     ----   ----               -------
      Normal  Scheduled  5m49s  default-scheduler  Successfully assigned pod-network-test-4379/netserver-3 to k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu
      Normal  Pulled     5m49s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine
      Normal  Created    5m49s  kubelet            Created container webserver
      Normal  Started    5m49s  kubelet            Started container webserver
    
    Sep  2 12:03:48.686: INFO: encountered error during dial (did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.1.41:9080/dial?request=hostname&protocol=http&host=192.168.2.27&port=8083&tries=1'
    retrieved map[]
    expected map[netserver-2:{}])
    Sep  2 12:03:48.686: INFO: ... Done probing pod [[[ 192.168.2.27 ]]]
    Sep  2 12:03:48.686: INFO: succeeded at polling 3 out of 4 connections
    Sep  2 12:03:48.686: INFO: pod polling failure summary:
    Sep  2 12:03:48.686: INFO: Collected error: did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.1.41:9080/dial?request=hostname&protocol=http&host=192.168.2.27&port=8083&tries=1'
    retrieved map[]
    expected map[netserver-2:{}]
    Sep  2 12:03:48.686: FAIL: failed,  1 out of 4 connections failed
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 +0x69
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0009f8a80)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 14 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
      Granular Checks: Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
        should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep  2 12:03:48.686: failed,  1 out of 4 connections failed
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82
    ------------------------------
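    The intra-pod connectivity failure above comes from the netcheck "dial" endpoint: the test-container-pod is asked to curl each netserver pod's hostname endpoint, and the dial response is expected to list each netserver's name. A rough manual reproduction, as a sketch only and assuming the pod-network-test-4379 namespace and pods from this run are still present:
        # Ask the test-container-pod (192.168.1.41) to dial netserver-2 (192.168.2.27:8083), as the test does
        kubectl --kubeconfig /tmp/kubeconfig -n pod-network-test-4379 exec test-container-pod -c webserver -- \
          curl -g -q -s 'http://192.168.1.41:9080/dial?request=hostname&protocol=http&host=192.168.2.27&port=8083&tries=1'
        # A healthy reply includes "netserver-2" in its responses list; this run retrieved an empty map instead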
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
    STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
    Sep  2 12:03:35.453: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:03:45.572: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:03:55.677: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:04:05.770: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:04:15.791: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:04:15.791: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000248290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  2 12:04:15.791: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000248290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1361
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":80,"skipped":1634,"failed":3,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:04:15.961: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-2643-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":81,"skipped":1634,"failed":3,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:04:23.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-4246" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":82,"skipped":1642,"failed":3,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":18,"skipped":274,"failed":2,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 11:59:28.227: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 6 lines ...
    
    STEP: creating a pod to probe DNS
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  2 12:03:05.357: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3594.svc.cluster.local from pod dns-3594/dns-test-ae42cebc-996d-4b11-a687-4a6359889bf4: the server is currently unable to handle the request (get pods dns-test-ae42cebc-996d-4b11-a687-4a6359889bf4)
    Sep  2 12:04:32.291: FAIL: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3594.svc.cluster.local from pod dns-3594/dns-test-ae42cebc-996d-4b11-a687-4a6359889bf4: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-3594/pods/dns-test-ae42cebc-996d-4b11-a687-4a6359889bf4/proxy/results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3594.svc.cluster.local": context deadline exceeded
    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc00019e010, 0x7f0111b44f18, 0x18, 0xc002e661b0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc00019e010, 0xc00067c830, 0x2a14500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f
... skipping 17 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b
    testing.tRunner(0xc00003c480, 0x729a2d8)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0902 12:04:32.293357      17 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep  2 12:04:32.291: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3594.svc.cluster.local from pod dns-3594/dns-test-ae42cebc-996d-4b11-a687-4a6359889bf4: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-3594/pods/dns-test-ae42cebc-996d-4b11-a687-4a6359889bf4/proxy/results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3594.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:217, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc00019e010, 0x7f0111b44f18, 0x18, 0xc002e661b0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc00019e010, 0xc00067c830, 0x2a14500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x78de4a8, 0xc00019e010, 0xc002e66101, 0xc002e661b0, 0xc00067c830, 0x6826620, 0xc00067c830)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:577 +0xe5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x78de4a8, 0xc00019e010, 0x12a05f200, 0x8bb2c97000, 0xc00067c830, 0x6d6e4e0, 0x2521201)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc003288380, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc0023c7500, 0xc, 0x10, 0x702fe9b, 0x7, 0xc0043a5800, 0x7971668, 0xc0047d5ce0, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x13c\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000e79b80, 0xc0043a5800, 0xc0023c7500, 0xc, 0x10)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.8()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:322 +0xb0f\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc00003c480)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc00003c480)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b\ntesting.tRunner(0xc00003c480, 0x729a2d8)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.
    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6bbe4c0, 0xc00385a0c0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6bbe4c0, 0xc00385a0c0)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc000c0cd00, 0x187, 0x88abe86, 0x7d, 0xd9, 0xc000289c00, 0xa8a)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x62ef260, 0x77956f0)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc000c0cd00, 0x187, 0xc003b315f8, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc000c0cd00, 0x187, 0xc003b316e0, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x70d3e4f, 0x24, 0xc003b31940, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc00019e010, 0x7f0111b44f18, 0x18, 0xc002e661b0)
... skipping 59 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  2 12:04:32.291: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3594.svc.cluster.local from pod dns-3594/dns-test-ae42cebc-996d-4b11-a687-4a6359889bf4: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-3594/pods/dns-test-ae42cebc-996d-4b11-a687-4a6359889bf4/proxy/results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3594.svc.cluster.local": context deadline exceeded
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":18,"skipped":274,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:04:32.407: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  2 12:04:32.514: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-297c679e-b27f-4a83-97b3-6649f929285b" in namespace "security-context-test-3033" to be "Succeeded or Failed"
    Sep  2 12:04:32.528: INFO: Pod "busybox-readonly-false-297c679e-b27f-4a83-97b3-6649f929285b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.672595ms
    Sep  2 12:04:34.535: INFO: Pod "busybox-readonly-false-297c679e-b27f-4a83-97b3-6649f929285b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021109235s
    Sep  2 12:04:36.543: INFO: Pod "busybox-readonly-false-297c679e-b27f-4a83-97b3-6649f929285b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028776616s
    Sep  2 12:04:36.543: INFO: Pod "busybox-readonly-false-297c679e-b27f-4a83-97b3-6649f929285b" satisfied condition "Succeeded or Failed"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:04:36.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-3033" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":274,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:04:38.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-6175" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":20,"skipped":339,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 43 lines ...
    STEP: Destroying namespace "services-5307" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":21,"skipped":357,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:04:55.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-6327" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":22,"skipped":377,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:04:55.670: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on tmpfs
    Sep  2 12:04:55.743: INFO: Waiting up to 5m0s for pod "pod-e451dd58-4e0a-43cd-83eb-4efeaa5f28e2" in namespace "emptydir-8729" to be "Succeeded or Failed"
    Sep  2 12:04:55.749: INFO: Pod "pod-e451dd58-4e0a-43cd-83eb-4efeaa5f28e2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05587ms
    Sep  2 12:04:57.755: INFO: Pod "pod-e451dd58-4e0a-43cd-83eb-4efeaa5f28e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01235623s
    Sep  2 12:04:59.762: INFO: Pod "pod-e451dd58-4e0a-43cd-83eb-4efeaa5f28e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019027916s
    STEP: Saw pod success
    Sep  2 12:04:59.762: INFO: Pod "pod-e451dd58-4e0a-43cd-83eb-4efeaa5f28e2" satisfied condition "Succeeded or Failed"
    Sep  2 12:04:59.768: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod pod-e451dd58-4e0a-43cd-83eb-4efeaa5f28e2 container test-container: <nil>
    STEP: delete the pod
    Sep  2 12:04:59.817: INFO: Waiting for pod pod-e451dd58-4e0a-43cd-83eb-4efeaa5f28e2 to disappear
    Sep  2 12:04:59.821: INFO: Pod pod-e451dd58-4e0a-43cd-83eb-4efeaa5f28e2 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:04:59.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-8729" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":385,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Projected combined
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-projected-all-test-volume-975fbfd9-7ead-45b3-b9e6-3593e0d3407f
    STEP: Creating secret with name secret-projected-all-test-volume-276f1d3b-4289-4e9b-a350-148ddeed0b56
    STEP: Creating a pod to test Check all projections for projected volume plugin
    Sep  2 12:04:59.920: INFO: Waiting up to 5m0s for pod "projected-volume-f5e2ba1f-b0b3-41c4-8f9b-0f2871872310" in namespace "projected-9404" to be "Succeeded or Failed"
    Sep  2 12:04:59.925: INFO: Pod "projected-volume-f5e2ba1f-b0b3-41c4-8f9b-0f2871872310": Phase="Pending", Reason="", readiness=false. Elapsed: 4.905481ms
    Sep  2 12:05:01.932: INFO: Pod "projected-volume-f5e2ba1f-b0b3-41c4-8f9b-0f2871872310": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012022829s
    Sep  2 12:05:03.939: INFO: Pod "projected-volume-f5e2ba1f-b0b3-41c4-8f9b-0f2871872310": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019126261s
    STEP: Saw pod success
    Sep  2 12:05:03.939: INFO: Pod "projected-volume-f5e2ba1f-b0b3-41c4-8f9b-0f2871872310" satisfied condition "Succeeded or Failed"
    Sep  2 12:05:03.944: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod projected-volume-f5e2ba1f-b0b3-41c4-8f9b-0f2871872310 container projected-all-volume-test: <nil>
    STEP: delete the pod
    Sep  2 12:05:03.974: INFO: Waiting for pod projected-volume-f5e2ba1f-b0b3-41c4-8f9b-0f2871872310 to disappear
    Sep  2 12:05:03.984: INFO: Pod projected-volume-f5e2ba1f-b0b3-41c4-8f9b-0f2871872310 no longer exists
    [AfterEach] [sig-storage] Projected combined
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:05:03.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9404" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":386,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:05:22.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-4280" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":25,"skipped":398,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Servers with support for Table transformation
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:05:22.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "tables-4014" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":26,"skipped":436,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:05:39.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-538" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":27,"skipped":437,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:05:49.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-1617" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":28,"skipped":438,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 31 lines ...
    Sep  2 11:52:51.570: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.1.25:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5392 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  2 11:52:51.570: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  2 11:52:51.647: INFO: Found all 1 expected endpoints: [netserver-1]
    Sep  2 11:52:51.647: INFO: Going to poll 192.168.2.22 on port 8083 at least 0 times, with a maximum of 46 tries before failing
    Sep  2 11:52:51.650: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.22:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5392 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  2 11:52:51.650: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  2 11:53:06.734: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.22:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  2 11:53:06.734: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
... skipping 220 lines (the same curl probe of http://192.168.2.22:8083/hostName was retried roughly every 17s and failed with exit code 1 on every attempt) ...
    Sep  2 12:05:41.451: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.22:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5392 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  2 12:05:41.451: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  2 12:05:56.606: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.22:8083/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""

    Sep  2 12:05:56.606: INFO: Waiting for [netserver-2] endpoints (expected=[netserver-2], actual=[])
    Sep  2 12:05:58.607: INFO: 
    Output of kubectl describe pod pod-network-test-5392/netserver-0:
    
    Sep  2 12:05:58.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-5392 describe pod netserver-0 --namespace=pod-network-test-5392'
    Sep  2 12:05:58.807: INFO: stderr: ""
... skipping 237 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  13m   default-scheduler  Successfully assigned pod-network-test-5392/netserver-3 to k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu
      Normal  Pulled     13m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine
      Normal  Created    13m   kubelet            Created container webserver
      Normal  Started    13m   kubelet            Started container webserver
    
    Sep  2 12:05:59.380: FAIL: Error dialing HTTP node to pod failed to find expected endpoints, 

    tries 46
    Command curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.22:8083/hostName
    retrieved map[]
    expected map[netserver-2:{}]
    
    Full Stack Trace
... skipping 16 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
      Granular Checks: Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
        should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep  2 12:05:59.380: Error dialing HTTP node to pod failed to find expected endpoints, 

        tries 46
        Command curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.22:8083/hostName
        retrieved map[]
        expected map[netserver-2:{}]
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
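The probe that kept failing above is just a curl run inside the helper pod via ExecWithOptions. It can be reproduced by hand against a live cluster with kubectl exec, reusing the namespace, pod, container and target address from this log (kubeconfig path assumed to be the suite's /tmp/kubeconfig; the trailing grep that only strips blank lines is omitted):

# Illustrative sketch: re-run the hostName probe the e2e framework kept retrying.
export KUBECONFIG=/tmp/kubeconfig
kubectl -n pod-network-test-5392 exec host-test-container-pod -c agnhost-container -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.22:8083/hostName"
# A healthy netserver answers with its own pod name ("netserver-2"); the empty
# reply and exit code 1 seen in this run point at broken node-to-pod networking
# for that address.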
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-configmap-qvsj
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  2 12:05:50.197: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qvsj" in namespace "subpath-1402" to be "Succeeded or Failed"

    Sep  2 12:05:50.203: INFO: Pod "pod-subpath-test-configmap-qvsj": Phase="Pending", Reason="", readiness=false. Elapsed: 5.923533ms
    Sep  2 12:05:52.212: INFO: Pod "pod-subpath-test-configmap-qvsj": Phase="Running", Reason="", readiness=true. Elapsed: 2.014101948s
    Sep  2 12:05:54.219: INFO: Pod "pod-subpath-test-configmap-qvsj": Phase="Running", Reason="", readiness=true. Elapsed: 4.021491474s
    Sep  2 12:05:56.227: INFO: Pod "pod-subpath-test-configmap-qvsj": Phase="Running", Reason="", readiness=true. Elapsed: 6.029073406s
    Sep  2 12:05:58.236: INFO: Pod "pod-subpath-test-configmap-qvsj": Phase="Running", Reason="", readiness=true. Elapsed: 8.038462749s
    Sep  2 12:06:00.253: INFO: Pod "pod-subpath-test-configmap-qvsj": Phase="Running", Reason="", readiness=true. Elapsed: 10.055198418s
... skipping 2 lines ...
    Sep  2 12:06:06.279: INFO: Pod "pod-subpath-test-configmap-qvsj": Phase="Running", Reason="", readiness=true. Elapsed: 16.081109212s
    Sep  2 12:06:08.287: INFO: Pod "pod-subpath-test-configmap-qvsj": Phase="Running", Reason="", readiness=true. Elapsed: 18.089449895s
    Sep  2 12:06:10.295: INFO: Pod "pod-subpath-test-configmap-qvsj": Phase="Running", Reason="", readiness=true. Elapsed: 20.097845518s
    Sep  2 12:06:12.303: INFO: Pod "pod-subpath-test-configmap-qvsj": Phase="Running", Reason="", readiness=false. Elapsed: 22.105560985s
    Sep  2 12:06:14.312: INFO: Pod "pod-subpath-test-configmap-qvsj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.114638415s
    STEP: Saw pod success
    Sep  2 12:06:14.312: INFO: Pod "pod-subpath-test-configmap-qvsj" satisfied condition "Succeeded or Failed"

    Sep  2 12:06:14.318: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod pod-subpath-test-configmap-qvsj container test-container-subpath-configmap-qvsj: <nil>
    STEP: delete the pod
    Sep  2 12:06:14.343: INFO: Waiting for pod pod-subpath-test-configmap-qvsj to disappear
    Sep  2 12:06:14.347: INFO: Pod pod-subpath-test-configmap-qvsj no longer exists
    STEP: Deleting pod pod-subpath-test-configmap-qvsj
    Sep  2 12:06:14.347: INFO: Deleting pod "pod-subpath-test-configmap-qvsj" in namespace "subpath-1402"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:06:14.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-1402" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":29,"skipped":473,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's cpu limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  2 12:06:14.637: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a04d4cb1-158f-4913-840d-133a031e5871" in namespace "downward-api-6069" to be "Succeeded or Failed"

    Sep  2 12:06:14.644: INFO: Pod "downwardapi-volume-a04d4cb1-158f-4913-840d-133a031e5871": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435076ms
    Sep  2 12:06:16.653: INFO: Pod "downwardapi-volume-a04d4cb1-158f-4913-840d-133a031e5871": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015235738s
    Sep  2 12:06:18.659: INFO: Pod "downwardapi-volume-a04d4cb1-158f-4913-840d-133a031e5871": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021366131s
    STEP: Saw pod success
    Sep  2 12:06:18.659: INFO: Pod "downwardapi-volume-a04d4cb1-158f-4913-840d-133a031e5871" satisfied condition "Succeeded or Failed"

    Sep  2 12:06:18.664: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod downwardapi-volume-a04d4cb1-158f-4913-840d-133a031e5871 container client-container: <nil>
    STEP: delete the pod
    Sep  2 12:06:18.687: INFO: Waiting for pod downwardapi-volume-a04d4cb1-158f-4913-840d-133a031e5871 to disappear
    Sep  2 12:06:18.691: INFO: Pod downwardapi-volume-a04d4cb1-158f-4913-840d-133a031e5871 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:06:18.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-6069" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":536,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:06:18.843: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-b48ecbc9-beb1-4aff-884a-737b468f021a
    STEP: Creating a pod to test consume configMaps
    Sep  2 12:06:18.918: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-04ea93e6-472c-47ca-b94d-7e7398b4533a" in namespace "projected-2889" to be "Succeeded or Failed"

    Sep  2 12:06:18.926: INFO: Pod "pod-projected-configmaps-04ea93e6-472c-47ca-b94d-7e7398b4533a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.754406ms
    Sep  2 12:06:20.932: INFO: Pod "pod-projected-configmaps-04ea93e6-472c-47ca-b94d-7e7398b4533a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014055575s
    Sep  2 12:06:22.938: INFO: Pod "pod-projected-configmaps-04ea93e6-472c-47ca-b94d-7e7398b4533a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020035017s
    STEP: Saw pod success
    Sep  2 12:06:22.938: INFO: Pod "pod-projected-configmaps-04ea93e6-472c-47ca-b94d-7e7398b4533a" satisfied condition "Succeeded or Failed"

    Sep  2 12:06:22.944: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5 pod pod-projected-configmaps-04ea93e6-472c-47ca-b94d-7e7398b4533a container agnhost-container: <nil>
    STEP: delete the pod
    Sep  2 12:06:22.979: INFO: Waiting for pod pod-projected-configmaps-04ea93e6-472c-47ca-b94d-7e7398b4533a to disappear
    Sep  2 12:06:22.985: INFO: Pod pod-projected-configmaps-04ea93e6-472c-47ca-b94d-7e7398b4533a no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:06:22.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2889" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":585,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Lease
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:06:23.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "lease-test-3029" for this suite.
    
    •
    ------------------------------
    {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":600,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:05:59.409: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pod-network-test
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 40 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:06:26.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-4052" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":600,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:06:27.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "certificates-217" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":36,"skipped":609,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:06:32.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-4953" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":37,"skipped":647,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":32,"skipped":620,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

    [BeforeEach] [sig-network] Service endpoints latency
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:06:23.246: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename svc-latency
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 415 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:06:34.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svc-latency-735" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":33,"skipped":620,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:07:04.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-6725" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":34,"skipped":621,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
    STEP: Create set of pods
    Sep  2 12:07:04.307: INFO: created test-pod-1
    Sep  2 12:07:04.313: INFO: created test-pod-2
    Sep  2 12:07:04.319: INFO: created test-pod-3
    STEP: waiting for all 3 pods to be running
    Sep  2 12:07:04.319: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-5390' to be running and ready
    Sep  2 12:07:04.337: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 12:07:04.337: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 12:07:04.337: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  2 12:07:04.337: INFO: 0 / 3 pods in namespace 'pods-5390' are running and ready (0 seconds elapsed)
    Sep  2 12:07:04.337: INFO: expected 0 pod replicas in namespace 'pods-5390', 0 are Running and Ready.
    Sep  2 12:07:04.337: INFO: POD         NODE                                                           PHASE    GRACE  CONDITIONS
    Sep  2 12:07:04.337: INFO: test-pod-1  k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5  Pending         [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 12:07:04 +0000 UTC  }]
    Sep  2 12:07:04.337: INFO: test-pod-2  k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw  Pending         [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 12:07:04 +0000 UTC  }]
    Sep  2 12:07:04.337: INFO: test-pod-3  k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu               Pending         [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-02 12:07:04 +0000 UTC  }]
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:07:09.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-5390" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":35,"skipped":624,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:07:09.420: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename resourcequota
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:07:16.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-515" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":36,"skipped":624,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 32 lines ...
    
    Sep  2 12:07:20.745: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment":
    &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88  deployment-1660  311fb0be-4645-45f4-9ab4-c25fbcb08722 15147 3 2022-09-02 12:07:18 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 8715428f-af90-49b7-8f4a-af1b16ce8430 0xc003457ce7 0xc003457ce8}] []  [{kube-controller-manager Update apps/v1 2022-09-02 12:07:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8715428f-af90-49b7-8f4a-af1b16ce8430\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-02 12:07:18 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003457d88 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
    Sep  2 12:07:20.745: INFO: All old ReplicaSets of Deployment "webserver-deployment":
    Sep  2 12:07:20.746: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb  deployment-1660  9baa0deb-b1e9-49c8-8df2-0b58b0310b31 15145 3 2022-09-02 12:07:16 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 8715428f-af90-49b7-8f4a-af1b16ce8430 0xc003457de7 0xc003457de8}] []  [{kube-controller-manager Update apps/v1 2022-09-02 12:07:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8715428f-af90-49b7-8f4a-af1b16ce8430\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-02 12:07:18 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [] []  []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003457e78 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
    Sep  2 12:07:20.775: INFO: Pod "webserver-deployment-795d758f88-2nghd" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-2nghd webserver-deployment-795d758f88- deployment-1660  437411e1-7d5c-46f5-8a20-dc71e1fda54c 15131 0 2022-09-02 12:07:18 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 311fb0be-4645-45f4-9ab4-c25fbcb08722 0xc003ea8317 0xc003ea8318}] []  [{kube-controller-manager Update v1 2022-09-02 12:07:18 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"311fb0be-4645-45f4-9ab4-c25fbcb08722\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-02 12:07:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.39\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-b8bw2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b8bw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEs
calation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-rxa2hz-worker-cznwre,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.39,StartTime:2022-09-02 12:07:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.39,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep  2 12:07:20.776: INFO: Pod "webserver-deployment-795d758f88-52r7d" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-52r7d webserver-deployment-795d758f88- deployment-1660  713e6c91-1043-4766-ba10-33c23e9c4f4c 15142 0 2022-09-02 12:07:18 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 311fb0be-4645-45f4-9ab4-c25fbcb08722 0xc003ea8520 0xc003ea8521}] []  [{kube-controller-manager Update v1 2022-09-02 12:07:18 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"311fb0be-4645-45f4-9ab4-c25fbcb08722\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-02 12:07:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.52\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-78lmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-78lmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEs
calation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.1.52,StartTime:2022-09-02 12:07:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.52,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep  2 12:07:20.776: INFO: Pod "webserver-deployment-795d758f88-8md7q" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-8md7q webserver-deployment-795d758f88- deployment-1660  da284089-d268-484f-b58d-57d7b9e3951b 15178 0 2022-09-02 12:07:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 311fb0be-4645-45f4-9ab4-c25fbcb08722 0xc003ea8720 0xc003ea8721}] []  [{kube-controller-manager Update v1 2022-09-02 12:07:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"311fb0be-4645-45f4-9ab4-c25fbcb08722\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4cvww,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4cvww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl
{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  2 12:07:20.777: INFO: Pod "webserver-deployment-795d758f88-cwt4h" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-cwt4h webserver-deployment-795d758f88- deployment-1660  9051ec6d-e56e-46ed-8e95-d210835621d0 15138 0 2022-09-02 12:07:18 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 311fb0be-4645-45f4-9ab4-c25fbcb08722 0xc003ea8867 0xc003ea8868}] []  [{kube-controller-manager Update v1 2022-09-02 12:07:18 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"311fb0be-4645-45f4-9ab4-c25fbcb08722\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-02 12:07:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.82\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dfrvl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dfrvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEs
calation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.6.82,StartTime:2022-09-02 12:07:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.82,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep  2 12:07:20.777: INFO: Pod "webserver-deployment-795d758f88-d6ss8" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-d6ss8 webserver-deployment-795d758f88- deployment-1660  c5c85d4d-5415-48f2-8d2c-1896efb89c58 15166 0 2022-09-02 12:07:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 311fb0be-4645-45f4-9ab4-c25fbcb08722 0xc003ea8a70 0xc003ea8a71}] []  [{kube-controller-manager Update v1 2022-09-02 12:07:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"311fb0be-4645-45f4-9ab4-c25fbcb08722\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-r9pm8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r9pm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl
{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  2 12:07:20.777: INFO: Pod "webserver-deployment-795d758f88-dx859" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-dx859 webserver-deployment-795d758f88- deployment-1660  74256e49-6e73-4ed0-b009-9c1ecf46423e 15187 0 2022-09-02 12:07:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 311fb0be-4645-45f4-9ab4-c25fbcb08722 0xc003ea8bb7 0xc003ea8bb8}] []  [{kube-controller-manager Update v1 2022-09-02 12:07:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"311fb0be-4645-45f4-9ab4-c25fbcb08722\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-74vlp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-74vlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl
{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  2 12:07:20.778: INFO: Pod "webserver-deployment-795d758f88-kwhjk" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-kwhjk webserver-deployment-795d758f88- deployment-1660  eb23125f-2e46-4d7f-a66b-f410fee21b2c 15183 0 2022-09-02 12:07:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 311fb0be-4645-45f4-9ab4-c25fbcb08722 0xc003ea8d07 0xc003ea8d08}] []  [{kube-controller-manager Update v1 2022-09-02 12:07:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"311fb0be-4645-45f4-9ab4-c25fbcb08722\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-02 12:07:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-v4jlq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v4jlq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},
Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2022-09-02 12:07:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  2 12:07:20.778: INFO: Pod "webserver-deployment-795d758f88-lkd9b" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-lkd9b webserver-deployment-795d758f88- deployment-1660  2d45382b-8f71-409c-bacf-17bc7a8ff8d5 15175 0 2022-09-02 12:07:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 311fb0be-4645-45f4-9ab4-c25fbcb08722 0xc003ea8ee0 0xc003ea8ee1}] []  [{kube-controller-manager Update v1 2022-09-02 12:07:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"311fb0be-4645-45f4-9ab4-c25fbcb08722\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-24gv7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-24gv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl
{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  2 12:07:20.778: INFO: Pod "webserver-deployment-795d758f88-rzm2t" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-rzm2t webserver-deployment-795d758f88- deployment-1660  a891fff9-8196-4576-9f8e-33c1e1e38735 15165 0 2022-09-02 12:07:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 311fb0be-4645-45f4-9ab4-c25fbcb08722 0xc003ea9027 0xc003ea9028}] []  [{kube-controller-manager Update v1 2022-09-02 12:07:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"311fb0be-4645-45f4-9ab4-c25fbcb08722\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-g642s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g642s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl
{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  2 12:07:20.779: INFO: Pod "webserver-deployment-795d758f88-smdh4" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-smdh4 webserver-deployment-795d758f88- deployment-1660  8e0bfa1a-8060-4a5f-825b-a1f4b46e5b77 15133 0 2022-09-02 12:07:18 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 311fb0be-4645-45f4-9ab4-c25fbcb08722 0xc003ea9177 0xc003ea9178}] []  [{kube-controller-manager Update v1 2022-09-02 12:07:18 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"311fb0be-4645-45f4-9ab4-c25fbcb08722\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-02 12:07:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.90\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-66gv6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-66gv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEs
calation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.90,StartTime:2022-09-02 12:07:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.90,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  2 12:07:20.779: INFO: Pod "webserver-deployment-795d758f88-smj9x" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-smj9x webserver-deployment-795d758f88- deployment-1660  9e2b4e95-fb9e-45de-a993-c2034539ca58 15128 0 2022-09-02 12:07:18 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 311fb0be-4645-45f4-9ab4-c25fbcb08722 0xc003ea9380 0xc003ea9381}] []  [{kube-controller-manager Update v1 2022-09-02 12:07:18 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"311fb0be-4645-45f4-9ab4-c25fbcb08722\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-02 12:07:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.89\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zqrpn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zqrpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEs
calation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.89,StartTime:2022-09-02 12:07:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.89,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  2 12:07:20.780: INFO: Pod "webserver-deployment-795d758f88-t8m4k" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-t8m4k webserver-deployment-795d758f88- deployment-1660  97b40699-ff6c-40ea-80bc-bd25c3dd079d 15181 0 2022-09-02 12:07:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 311fb0be-4645-45f4-9ab4-c25fbcb08722 0xc003ea9580 0xc003ea9581}] []  [{kube-controller-manager Update v1 2022-09-02 12:07:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"311fb0be-4645-45f4-9ab4-c25fbcb08722\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-65rf7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-65rf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl
{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  2 12:07:20.780: INFO: Pod "webserver-deployment-795d758f88-vfmpj" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-vfmpj webserver-deployment-795d758f88- deployment-1660  32311b21-f20d-4b1b-85a7-5568d0c89f26 15174 0 2022-09-02 12:07:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 311fb0be-4645-45f4-9ab4-c25fbcb08722 0xc003ea96c7 0xc003ea96c8}] []  [{kube-controller-manager Update v1 2022-09-02 12:07:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"311fb0be-4645-45f4-9ab4-c25fbcb08722\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9cnjk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9cnjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl
{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  2 12:07:20.780: INFO: Pod "webserver-deployment-847dcfb7fb-2dbwf" is available:
    &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2dbwf webserver-deployment-847dcfb7fb- deployment-1660  c732c818-be7e-414b-b91d-9dd2bc648935 15046 0 2022-09-02 12:07:16 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 9baa0deb-b1e9-49c8-8df2-0b58b0310b31 0xc003ea9817 0xc003ea9818}] []  [{kube-controller-manager Update v1 2022-09-02 12:07:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9baa0deb-b1e9-49c8-8df2-0b58b0310b31\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-02 12:07:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.80\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dhstv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dhstv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunA
sGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-02 12:07:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.6.80,StartTime:2022-09-02 12:07:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-02 12:07:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://048f65a12172b5b4a7f1ad21b05f91ad6c61ea93748097ae05a740188082970d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.80,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 39 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:07:20.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-1660" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":37,"skipped":645,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:07:26.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-9120" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":665,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}
    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "webhook-7065-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":39,"skipped":667,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}
    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] PreStop
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 225 lines ...
    		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
    	],
    	"StillContactingPeers": true
    }
    Sep  2 12:07:36.783: FAIL: validating pre-stop.
    Unexpected error:
        <*errors.errorString | 0xc000242280>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    [sig-node] PreStop
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
      should call prestop when killing a pod  [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  2 12:07:36.783: validating pre-stop.
      Unexpected error:
          <*errors.errorString | 0xc000242280>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 29 lines ...
    STEP: Destroying namespace "webhook-1684-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":40,"skipped":683,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:07:38.660: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  2 12:07:40.749: INFO: Deleting pod "var-expansion-256babbf-255d-435c-9bda-d79e7a26fd93" in namespace "var-expansion-9500"
    Sep  2 12:07:40.754: INFO: Wait up to 5m0s for pod "var-expansion-256babbf-255d-435c-9bda-d79e7a26fd93" to be fully deleted
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:07:42.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-9500" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":41,"skipped":683,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}
    
    SSSSS
    ------------------------------
    {"msg":"FAILED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":37,"skipped":668,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    [BeforeEach] [sig-node] PreStop
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:07:36.811: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename prestop
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 23 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:07:45.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "prestop-8139" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":38,"skipped":668,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 41 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:07:56.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-4865" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":39,"skipped":688,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:07:56.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-1362" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":726,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
    
    STEP: creating a pod to probe DNS
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  2 12:07:58.222: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-4307.svc.cluster.local from pod dns-4307/dns-test-d187fcbc-ea4b-4bd9-b5cf-c1027a5fef85: the server is currently unable to handle the request (get pods dns-test-d187fcbc-ea4b-4bd9-b5cf-c1027a5fef85)
    Sep  2 12:09:25.392: FAIL: Unable to read jessie_udp@dns-test-service-3.dns-4307.svc.cluster.local from pod dns-4307/dns-test-d187fcbc-ea4b-4bd9-b5cf-c1027a5fef85: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-4307/pods/dns-test-d187fcbc-ea4b-4bd9-b5cf-c1027a5fef85/proxy/results/jessie_udp@dns-test-service-3.dns-4307.svc.cluster.local": context deadline exceeded
    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc000054090, 0x7f2d50303108, 0x18, 0xc0019bcea0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc000054090, 0xc003ee87a0, 0x2a14500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f
... skipping 15 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b
    testing.tRunner(0xc00031bb00, 0x729a2d8)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0902 12:09:25.392908      19 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep  2 12:09:25.392: Unable to read jessie_udp@dns-test-service-3.dns-4307.svc.cluster.local from pod dns-4307/dns-test-d187fcbc-ea4b-4bd9-b5cf-c1027a5fef85: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-4307/pods/dns-test-d187fcbc-ea4b-4bd9-b5cf-c1027a5fef85/proxy/results/jessie_udp@dns-test-service-3.dns-4307.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:217, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc000054090, 0x7f2d50303108, 0x18, 0xc0019bcea0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc000054090, 0xc003ee87a0, 0x2a14500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x78de4a8, 0xc000054090, 0xc0019bce01, 0xc0019bcea0, 0xc003ee87a0, 0x6826620, 0xc003ee87a0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:577 +0xe5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x78de4a8, 0xc000054090, 0x12a05f200, 0x8bb2c97000, 0xc003ee87a0, 0x6d6e4e0, 0x2521201)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc002759110, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc003402560, 0x2, 0x2, 0x702fe9b, 0x7, 0xc00429d400, 0x7971668, 0xc002d1f1e0, 0x1, 0x70515b7, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x13c\nk8s.io/kubernetes/test/e2e/network.validateTargetedProbeOutput(0xc000d39600, 0xc00429d400, 0xc003402560, 0x2, 0x2, 0x70515b7, 0x10)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:548 +0x376\nk8s.io/kubernetes/test/e2e/network.glob..func2.9()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:354 +0x6ed\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc00031bb00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc00031bb00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b\ntesting.tRunner(0xc00031bb00, 0x729a2d8)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.
    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6bbe4c0, 0xc0049b69c0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6bbe4c0, 0xc0049b69c0)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc003f6a180, 0x16b, 0x88abe86, 0x7d, 0xd9, 0xc003f70a80, 0x9fe)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x62ef260, 0x77956f0)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc003f6a180, 0x16b, 0xc003935e88, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc003f6a180, 0x16b, 0xc003935f70, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x70d3e4f, 0x24, 0xc0039361d0, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc000054090, 0x7f2d50303108, 0x18, 0xc0019bcea0)
... skipping 57 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  2 12:09:25.392: Unable to read jessie_udp@dns-test-service-3.dns-4307.svc.cluster.local from pod dns-4307/dns-test-d187fcbc-ea4b-4bd9-b5cf-c1027a5fef85: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-4307/pods/dns-test-d187fcbc-ea4b-4bd9-b5cf-c1027a5fef85/proxy/results/jessie_udp@dns-test-service-3.dns-4307.svc.cluster.local": context deadline exceeded
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217
    ------------------------------
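    For context on the Ginkgo notice quoted above ("Ginkgo panics to prevent subsequent assertions from running... if you make an assertion in a goroutine, Ginkgo can't capture the panic"): the usual remedy is to defer GinkgoRecover() at the top of any goroutine that makes assertions, so a failed assertion is reported as a normal test failure instead of the "Observed a panic" output seen in this log. The sketch below is illustrative only and is not taken from this test suite; the Describe/It wrapper and the doWork helper are placeholders.

        package example_test

        import (
            "sync"

            . "github.com/onsi/ginkgo"
            . "github.com/onsi/gomega"
        )

        // doWork stands in for whatever the goroutine actually exercises.
        func doWork() error { return nil }

        var _ = Describe("assertions made from a goroutine", func() {
            It("defers GinkgoRecover so a failure is reported instead of panicking", func() {
                var wg sync.WaitGroup
                wg.Add(1)
                go func() {
                    defer wg.Done()
                    // Without this deferred call, a failed Expect panics inside the
                    // goroutine and surfaces as "Observed a panic" in the build log.
                    defer GinkgoRecover()
                    Expect(doWork()).To(Succeed())
                }()
                wg.Wait()
            })
        })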
    {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":510,"failed":3,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:03:48.715: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pod-network-test
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 274 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  23s   default-scheduler  Successfully assigned pod-network-test-9387/netserver-3 to k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu
      Normal  Pulled     22s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine
      Normal  Created    22s   kubelet            Created container webserver
      Normal  Started    22s   kubelet            Started container webserver
    
    Sep  2 12:04:11.191: INFO: encountered error during dial (did not find expected responses... 
    Tries 1
    Command curl -g -q -s 'http://192.168.1.47:9080/dial?request=hostname&protocol=http&host=192.168.2.33&port=8083&tries=1'
    retrieved map[]
    expected map[netserver-2:{}])
    Sep  2 12:04:11.191: INFO: ...failed...will try again in next pass
    Sep  2 12:04:11.191: INFO: Breadth first check of 192.168.6.69 on host 172.18.0.6...
    Sep  2 12:04:11.198: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.1.47:9080/dial?request=hostname&protocol=http&host=192.168.6.69&port=8083&tries=1'] Namespace:pod-network-test-9387 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  2 12:04:11.198: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  2 12:04:11.339: INFO: Waiting for responses: map[]
    Sep  2 12:04:11.339: INFO: reached 192.168.6.69 after 0/1 tries
    Sep  2 12:04:11.339: INFO: Going to retry 1 out of 4 pods....
... skipping 382 lines ...
      ----    ------     ----   ----               -------
      Normal  Scheduled  5m51s  default-scheduler  Successfully assigned pod-network-test-9387/netserver-3 to k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu
      Normal  Pulled     5m50s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine
      Normal  Created    5m50s  kubelet            Created container webserver
      Normal  Started    5m50s  kubelet            Started container webserver
    
    Sep  2 12:09:39.674: INFO: encountered error during dial (did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.1.47:9080/dial?request=hostname&protocol=http&host=192.168.2.33&port=8083&tries=1'
    retrieved map[]
    expected map[netserver-2:{}])
    Sep  2 12:09:39.674: INFO: ... Done probing pod [[[ 192.168.2.33 ]]]
    Sep  2 12:09:39.674: INFO: succeeded at polling 3 out of 4 connections
    Sep  2 12:09:39.674: INFO: pod polling failure summary:
    Sep  2 12:09:39.674: INFO: Collected error: did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.1.47:9080/dial?request=hostname&protocol=http&host=192.168.2.33&port=8083&tries=1'
    retrieved map[]
    expected map[netserver-2:{}]
    Sep  2 12:09:39.674: FAIL: failed,  1 out of 4 connections failed
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 +0x69
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0009f8a80)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 14 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
      Granular Checks: Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
        should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep  2 12:09:39.674: failed,  1 out of 4 connections failed
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82
    ------------------------------
    {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":510,"failed":4,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 110 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:09:44.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-6838" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":27,"skipped":555,"failed":4,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 45 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:10:25.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-749" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":28,"skipped":556,"failed":4,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:07:56.601: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod with failed condition
    STEP: updating the pod
    Sep  2 12:09:57.193: INFO: Successfully updated pod "var-expansion-7f59feda-5555-4291-96b1-b15797ed8c53"
    STEP: waiting for pod running
    STEP: deleting the pod gracefully
    Sep  2 12:09:59.202: INFO: Deleting pod "var-expansion-7f59feda-5555-4291-96b1-b15797ed8c53" in namespace "var-expansion-6482"
    Sep  2 12:09:59.208: INFO: Wait up to 5m0s for pod "var-expansion-7f59feda-5555-4291-96b1-b15797ed8c53" to be fully deleted
... skipping 6 lines ...
    • [SLOW TEST:154.626 seconds]
    [sig-node] Variable Expansion
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":41,"skipped":752,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:10:32.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-4285" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":29,"skipped":592,"failed":4,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's memory request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  2 12:10:31.291: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0efad269-d789-40ee-8f0b-1dd39141023e" in namespace "projected-6673" to be "Succeeded or Failed"
    Sep  2 12:10:31.296: INFO: Pod "downwardapi-volume-0efad269-d789-40ee-8f0b-1dd39141023e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.273399ms
    Sep  2 12:10:33.301: INFO: Pod "downwardapi-volume-0efad269-d789-40ee-8f0b-1dd39141023e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009695604s
    Sep  2 12:10:35.305: INFO: Pod "downwardapi-volume-0efad269-d789-40ee-8f0b-1dd39141023e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013520917s
    STEP: Saw pod success
    Sep  2 12:10:35.305: INFO: Pod "downwardapi-volume-0efad269-d789-40ee-8f0b-1dd39141023e" satisfied condition "Succeeded or Failed"
    Sep  2 12:10:35.309: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod downwardapi-volume-0efad269-d789-40ee-8f0b-1dd39141023e container client-container: <nil>
    STEP: delete the pod
    Sep  2 12:10:35.341: INFO: Waiting for pod downwardapi-volume-0efad269-d789-40ee-8f0b-1dd39141023e to disappear
    Sep  2 12:10:35.345: INFO: Pod downwardapi-volume-0efad269-d789-40ee-8f0b-1dd39141023e no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:10:35.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-6673" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":769,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide podname only [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  2 12:10:33.027: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0f5aa52-c451-46a5-8b1d-461b0fc735d5" in namespace "projected-1213" to be "Succeeded or Failed"
    Sep  2 12:10:33.030: INFO: Pod "downwardapi-volume-a0f5aa52-c451-46a5-8b1d-461b0fc735d5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.042187ms
    Sep  2 12:10:35.036: INFO: Pod "downwardapi-volume-a0f5aa52-c451-46a5-8b1d-461b0fc735d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008943289s
    Sep  2 12:10:37.041: INFO: Pod "downwardapi-volume-a0f5aa52-c451-46a5-8b1d-461b0fc735d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014074472s
    STEP: Saw pod success
    Sep  2 12:10:37.041: INFO: Pod "downwardapi-volume-a0f5aa52-c451-46a5-8b1d-461b0fc735d5" satisfied condition "Succeeded or Failed"
    Sep  2 12:10:37.044: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5 pod downwardapi-volume-a0f5aa52-c451-46a5-8b1d-461b0fc735d5 container client-container: <nil>
    STEP: delete the pod
    Sep  2 12:10:37.060: INFO: Waiting for pod downwardapi-volume-a0f5aa52-c451-46a5-8b1d-461b0fc735d5 to disappear
    Sep  2 12:10:37.064: INFO: Pod downwardapi-volume-a0f5aa52-c451-46a5-8b1d-461b0fc735d5 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:10:37.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-1213" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":601,"failed":4,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 48 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:10:39.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-2512" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":31,"skipped":605,"failed":4,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:10:39.999: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail to create ConfigMap with empty key [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap that has name configmap-test-emptyKey-9c020f35-a131-4d8b-9b45-bc0a9e8c3b01
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:10:40.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-4049" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":32,"skipped":622,"failed":4,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:11:00.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-6224" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":782,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 102 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:11:41.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-6023" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":33,"skipped":630,"failed":4,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:11:52.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-2370" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":793,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:11:53.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-2992" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":45,"skipped":804,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:11:55.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-419" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":829,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:11:55.287: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:12:01.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-5533" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":829,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:12:01.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-3217" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":48,"skipped":833,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:12:08.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-7362" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":49,"skipped":852,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:12:14.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-4791" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":50,"skipped":863,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] PodTemplates
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:12:14.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "podtemplate-9201" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":51,"skipped":873,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] version v1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 336 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:12:19.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "proxy-6616" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":52,"skipped":888,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] PodTemplates
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:12:19.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "podtemplate-8084" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":53,"skipped":930,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:11:41.598: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename init-container
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
    [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod
    Sep  2 12:11:41.636: INFO: PodSpec: initContainers in spec.initContainers
    Sep  2 12:12:26.613: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-3b2729f5-472a-428f-ba14-dc4bc6e04c52", GenerateName:"", Namespace:"init-container-2178", SelfLink:"", UID:"0ab5d94a-2f0a-45d4-8a37-1f45d9c31610", ResourceVersion:"17850", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63797717501, loc:(*time.Location)(0xa04a040)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"636530180"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00392d320), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00392d338), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00392d350), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00392d368), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-nq97c", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc002c41bc0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-nq97c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", 
Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-nq97c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-nq97c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00465b0b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000e4f650), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00465b130)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00465b150)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00465b158), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00465b15c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0029033c0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", 
Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797717501, loc:(*time.Location)(0xa04a040)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797717501, loc:(*time.Location)(0xa04a040)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797717501, loc:(*time.Location)(0xa04a040)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797717501, loc:(*time.Location)(0xa04a040)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.5", PodIP:"192.168.1.66", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.1.66"}}, StartTime:(*v1.Time)(0xc00392d398), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000e4f730)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000e4f810)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"containerd://3cf56a8a16958b2a96ab13cfce108775c250dc1ac740b3def5f7a9ace94266ea", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c41da0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c41d20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.5", ImageID:"", ContainerID:"", Started:(*bool)(0xc00465b1df)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
    [AfterEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:12:26.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-2178" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":34,"skipped":639,"failed":4,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 48 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:12:43.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-7560" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":54,"skipped":944,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:12:43.975: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on node default medium
    Sep  2 12:12:44.023: INFO: Waiting up to 5m0s for pod "pod-a9c45dab-5a45-48fe-be57-a10ce9ffeff9" in namespace "emptydir-2678" to be "Succeeded or Failed"
    Sep  2 12:12:44.028: INFO: Pod "pod-a9c45dab-5a45-48fe-be57-a10ce9ffeff9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.018474ms
    Sep  2 12:12:46.032: INFO: Pod "pod-a9c45dab-5a45-48fe-be57-a10ce9ffeff9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009329745s
    Sep  2 12:12:48.036: INFO: Pod "pod-a9c45dab-5a45-48fe-be57-a10ce9ffeff9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013306138s
    STEP: Saw pod success
    Sep  2 12:12:48.036: INFO: Pod "pod-a9c45dab-5a45-48fe-be57-a10ce9ffeff9" satisfied condition "Succeeded or Failed"
    Sep  2 12:12:48.040: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5 pod pod-a9c45dab-5a45-48fe-be57-a10ce9ffeff9 container test-container: <nil>
    STEP: delete the pod
    Sep  2 12:12:48.059: INFO: Waiting for pod pod-a9c45dab-5a45-48fe-be57-a10ce9ffeff9 to disappear
    Sep  2 12:12:48.063: INFO: Pod pod-a9c45dab-5a45-48fe-be57-a10ce9ffeff9 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:12:48.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-2678" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":55,"skipped":974,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with downward pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-downwardapi-rlz9
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  2 12:12:26.692: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rlz9" in namespace "subpath-2079" to be "Succeeded or Failed"
    Sep  2 12:12:26.696: INFO: Pod "pod-subpath-test-downwardapi-rlz9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101506ms
    Sep  2 12:12:28.702: INFO: Pod "pod-subpath-test-downwardapi-rlz9": Phase="Running", Reason="", readiness=true. Elapsed: 2.00948978s
    Sep  2 12:12:30.709: INFO: Pod "pod-subpath-test-downwardapi-rlz9": Phase="Running", Reason="", readiness=true. Elapsed: 4.016167633s
    Sep  2 12:12:32.713: INFO: Pod "pod-subpath-test-downwardapi-rlz9": Phase="Running", Reason="", readiness=true. Elapsed: 6.020227688s
    Sep  2 12:12:34.717: INFO: Pod "pod-subpath-test-downwardapi-rlz9": Phase="Running", Reason="", readiness=true. Elapsed: 8.025107113s
    Sep  2 12:12:36.723: INFO: Pod "pod-subpath-test-downwardapi-rlz9": Phase="Running", Reason="", readiness=true. Elapsed: 10.03096374s
... skipping 2 lines ...
    Sep  2 12:12:42.740: INFO: Pod "pod-subpath-test-downwardapi-rlz9": Phase="Running", Reason="", readiness=true. Elapsed: 16.047201048s
    Sep  2 12:12:44.744: INFO: Pod "pod-subpath-test-downwardapi-rlz9": Phase="Running", Reason="", readiness=true. Elapsed: 18.051904418s
    Sep  2 12:12:46.750: INFO: Pod "pod-subpath-test-downwardapi-rlz9": Phase="Running", Reason="", readiness=true. Elapsed: 20.057815911s
    Sep  2 12:12:48.759: INFO: Pod "pod-subpath-test-downwardapi-rlz9": Phase="Running", Reason="", readiness=false. Elapsed: 22.066842045s
    Sep  2 12:12:50.763: INFO: Pod "pod-subpath-test-downwardapi-rlz9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.070763758s
    STEP: Saw pod success
    Sep  2 12:12:50.763: INFO: Pod "pod-subpath-test-downwardapi-rlz9" satisfied condition "Succeeded or Failed"
    Sep  2 12:12:50.797: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5 pod pod-subpath-test-downwardapi-rlz9 container test-container-subpath-downwardapi-rlz9: <nil>
    STEP: delete the pod
    Sep  2 12:12:50.823: INFO: Waiting for pod pod-subpath-test-downwardapi-rlz9 to disappear
    Sep  2 12:12:50.827: INFO: Pod pod-subpath-test-downwardapi-rlz9 no longer exists
    STEP: Deleting pod pod-subpath-test-downwardapi-rlz9
    Sep  2 12:12:50.827: INFO: Deleting pod "pod-subpath-test-downwardapi-rlz9" in namespace "subpath-2079"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:12:50.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-2079" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":35,"skipped":643,"failed":4,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:13:08.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-2296" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":36,"skipped":799,"failed":4,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:13:10.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-7060" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":806,"failed":4,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's cpu limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  2 12:13:10.234: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8afaeec-c32b-4d2b-afcb-2a95bd219f55" in namespace "projected-9993" to be "Succeeded or Failed"
    Sep  2 12:13:10.238: INFO: Pod "downwardapi-volume-e8afaeec-c32b-4d2b-afcb-2a95bd219f55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089338ms
    Sep  2 12:13:12.246: INFO: Pod "downwardapi-volume-e8afaeec-c32b-4d2b-afcb-2a95bd219f55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011312227s
    Sep  2 12:13:14.251: INFO: Pod "downwardapi-volume-e8afaeec-c32b-4d2b-afcb-2a95bd219f55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01685356s
    STEP: Saw pod success
    Sep  2 12:13:14.251: INFO: Pod "downwardapi-volume-e8afaeec-c32b-4d2b-afcb-2a95bd219f55" satisfied condition "Succeeded or Failed"
    Sep  2 12:13:14.255: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod downwardapi-volume-e8afaeec-c32b-4d2b-afcb-2a95bd219f55 container client-container: <nil>
    STEP: delete the pod
    Sep  2 12:13:14.278: INFO: Waiting for pod downwardapi-volume-e8afaeec-c32b-4d2b-afcb-2a95bd219f55 to disappear
    Sep  2 12:13:14.283: INFO: Pod downwardapi-volume-e8afaeec-c32b-4d2b-afcb-2a95bd219f55 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:13:14.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9993" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":807,"failed":4,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:13:14.381: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via environment variable [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap configmap-3562/configmap-test-243adbff-9289-4e13-ba42-2dbb225e9b50
    STEP: Creating a pod to test consume configMaps
    Sep  2 12:13:14.451: INFO: Waiting up to 5m0s for pod "pod-configmaps-0fec0426-fdfa-4800-8666-4bb515aa5b50" in namespace "configmap-3562" to be "Succeeded or Failed"
    Sep  2 12:13:14.455: INFO: Pod "pod-configmaps-0fec0426-fdfa-4800-8666-4bb515aa5b50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291471ms
    Sep  2 12:13:16.461: INFO: Pod "pod-configmaps-0fec0426-fdfa-4800-8666-4bb515aa5b50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00986901s
    Sep  2 12:13:18.466: INFO: Pod "pod-configmaps-0fec0426-fdfa-4800-8666-4bb515aa5b50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015144951s
    STEP: Saw pod success
    Sep  2 12:13:18.466: INFO: Pod "pod-configmaps-0fec0426-fdfa-4800-8666-4bb515aa5b50" satisfied condition "Succeeded or Failed"
    Sep  2 12:13:18.471: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod pod-configmaps-0fec0426-fdfa-4800-8666-4bb515aa5b50 container env-test: <nil>
    STEP: delete the pod
    Sep  2 12:13:18.508: INFO: Waiting for pod pod-configmaps-0fec0426-fdfa-4800-8666-4bb515aa5b50 to disappear
    Sep  2 12:13:18.512: INFO: Pod pod-configmaps-0fec0426-fdfa-4800-8666-4bb515aa5b50 no longer exists
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:13:18.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-3562" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":846,"failed":4,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
    Sep  2 12:12:52.256: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:12:52.261: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:12:52.297: INFO: Unable to read jessie_udp@dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:12:52.302: INFO: Unable to read jessie_tcp@dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:12:52.306: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:12:52.313: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:12:52.344: INFO: Lookups using dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db failed for: [wheezy_udp@dns-test-service.dns-3616.svc.cluster.local wheezy_tcp@dns-test-service.dns-3616.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local jessie_udp@dns-test-service.dns-3616.svc.cluster.local jessie_tcp@dns-test-service.dns-3616.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local]
    
    Sep  2 12:12:57.350: INFO: Unable to read wheezy_udp@dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:12:57.354: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:12:57.357: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:12:57.362: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:12:57.389: INFO: Unable to read jessie_udp@dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:12:57.393: INFO: Unable to read jessie_tcp@dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:12:57.396: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:12:57.399: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:12:57.419: INFO: Lookups using dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db failed for: [wheezy_udp@dns-test-service.dns-3616.svc.cluster.local wheezy_tcp@dns-test-service.dns-3616.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local jessie_udp@dns-test-service.dns-3616.svc.cluster.local jessie_tcp@dns-test-service.dns-3616.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local]
    
    Sep  2 12:13:02.352: INFO: Unable to read wheezy_udp@dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:02.357: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:02.362: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:02.368: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:02.409: INFO: Unable to read jessie_udp@dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:02.413: INFO: Unable to read jessie_tcp@dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:02.419: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:02.423: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:02.456: INFO: Lookups using dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db failed for: [wheezy_udp@dns-test-service.dns-3616.svc.cluster.local wheezy_tcp@dns-test-service.dns-3616.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local jessie_udp@dns-test-service.dns-3616.svc.cluster.local jessie_tcp@dns-test-service.dns-3616.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local]
    
    Sep  2 12:13:07.351: INFO: Unable to read wheezy_udp@dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:07.360: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:07.367: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:07.372: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:07.411: INFO: Unable to read jessie_udp@dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:07.416: INFO: Unable to read jessie_tcp@dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:07.421: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:07.437: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:07.470: INFO: Lookups using dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db failed for: [wheezy_udp@dns-test-service.dns-3616.svc.cluster.local wheezy_tcp@dns-test-service.dns-3616.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local jessie_udp@dns-test-service.dns-3616.svc.cluster.local jessie_tcp@dns-test-service.dns-3616.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local]
    
    Sep  2 12:13:12.351: INFO: Unable to read wheezy_udp@dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:12.356: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:12.363: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:12.368: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:12.413: INFO: Unable to read jessie_udp@dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:12.419: INFO: Unable to read jessie_tcp@dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:12.424: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:12.428: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:12.463: INFO: Lookups using dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db failed for: [wheezy_udp@dns-test-service.dns-3616.svc.cluster.local wheezy_tcp@dns-test-service.dns-3616.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local jessie_udp@dns-test-service.dns-3616.svc.cluster.local jessie_tcp@dns-test-service.dns-3616.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local]
    
    Sep  2 12:13:17.349: INFO: Unable to read wheezy_udp@dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:17.354: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:17.359: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:17.365: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:17.398: INFO: Unable to read jessie_udp@dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:17.401: INFO: Unable to read jessie_tcp@dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:17.405: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:17.409: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local from pod dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db: the server could not find the requested resource (get pods dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db)
    Sep  2 12:13:17.433: INFO: Lookups using dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db failed for: [wheezy_udp@dns-test-service.dns-3616.svc.cluster.local wheezy_tcp@dns-test-service.dns-3616.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local jessie_udp@dns-test-service.dns-3616.svc.cluster.local jessie_tcp@dns-test-service.dns-3616.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3616.svc.cluster.local]
    
    Sep  2 12:13:22.450: INFO: DNS probes using dns-3616/dns-test-7d0aeef6-adfc-46bb-b38e-c4054765d4db succeeded
    
    STEP: deleting the pod
    STEP: deleting the test service
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:13:22.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-3616" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":56,"skipped":1000,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:13:22.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-766" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":57,"skipped":1039,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 34 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:13:23.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-5752" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":-1,"completed":40,"skipped":868,"failed":4,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    S
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:13:26.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-2550" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":58,"skipped":1045,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:13:27.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-6477" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":869,"failed":4,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's memory request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  2 12:13:27.014: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad03ef95-0898-4e4f-bc3d-0b7d87bbc926" in namespace "downward-api-8815" to be "Succeeded or Failed"
    Sep  2 12:13:27.017: INFO: Pod "downwardapi-volume-ad03ef95-0898-4e4f-bc3d-0b7d87bbc926": Phase="Pending", Reason="", readiness=false. Elapsed: 2.819285ms
    Sep  2 12:13:29.023: INFO: Pod "downwardapi-volume-ad03ef95-0898-4e4f-bc3d-0b7d87bbc926": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008649696s
    Sep  2 12:13:31.028: INFO: Pod "downwardapi-volume-ad03ef95-0898-4e4f-bc3d-0b7d87bbc926": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013101422s
    STEP: Saw pod success
    Sep  2 12:13:31.028: INFO: Pod "downwardapi-volume-ad03ef95-0898-4e4f-bc3d-0b7d87bbc926" satisfied condition "Succeeded or Failed"
    Sep  2 12:13:31.031: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod downwardapi-volume-ad03ef95-0898-4e4f-bc3d-0b7d87bbc926 container client-container: <nil>
    STEP: delete the pod
    Sep  2 12:13:31.047: INFO: Waiting for pod downwardapi-volume-ad03ef95-0898-4e4f-bc3d-0b7d87bbc926 to disappear
    Sep  2 12:13:31.049: INFO: Pod downwardapi-volume-ad03ef95-0898-4e4f-bc3d-0b7d87bbc926 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:13:31.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-8815" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":59,"skipped":1064,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "services-284" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":60,"skipped":1074,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    SS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:13:38.605: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  2 12:13:38.648: INFO: Waiting up to 5m0s for pod "busybox-user-65534-0a2569e2-22ad-4d1f-beae-b330fdd168e6" in namespace "security-context-test-3779" to be "Succeeded or Failed"
    Sep  2 12:13:38.652: INFO: Pod "busybox-user-65534-0a2569e2-22ad-4d1f-beae-b330fdd168e6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.417651ms
    Sep  2 12:13:40.657: INFO: Pod "busybox-user-65534-0a2569e2-22ad-4d1f-beae-b330fdd168e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00869681s
    Sep  2 12:13:42.661: INFO: Pod "busybox-user-65534-0a2569e2-22ad-4d1f-beae-b330fdd168e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01328455s
    Sep  2 12:13:42.661: INFO: Pod "busybox-user-65534-0a2569e2-22ad-4d1f-beae-b330fdd168e6" satisfied condition "Succeeded or Failed"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:13:42.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-3779" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":61,"skipped":1076,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
    STEP: Registering slow webhook via the AdmissionRegistration API
    Sep  2 12:13:41.466: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:13:51.578: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:14:01.679: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:14:11.777: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:14:21.787: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:14:21.788: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc0002ba280>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should honor timeout [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  2 12:14:21.788: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc0002ba280>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2188
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":82,"skipped":1674,"failed":4,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}

    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:09:25.436: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 6 lines ...
    
    STEP: creating a pod to probe DNS
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  2 12:13:01.325: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-6913.svc.cluster.local from pod dns-6913/dns-test-f9498f22-349f-47ba-996d-6b05474357ce: the server is currently unable to handle the request (get pods dns-test-f9498f22-349f-47ba-996d-6b05474357ce)
    Sep  2 12:14:27.516: FAIL: Unable to read jessie_udp@dns-test-service-3.dns-6913.svc.cluster.local from pod dns-6913/dns-test-f9498f22-349f-47ba-996d-6b05474357ce: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-6913/pods/dns-test-f9498f22-349f-47ba-996d-6b05474357ce/proxy/results/jessie_udp@dns-test-service-3.dns-6913.svc.cluster.local": context deadline exceeded
    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc000054090, 0x7f2d50303108, 0x18, 0xc001114360)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc000054090, 0xc0037d0450, 0x2a14500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f
... skipping 15 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b
    testing.tRunner(0xc00031bb00, 0x729a2d8)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0902 12:14:27.517676      19 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep  2 12:14:27.516: Unable to read jessie_udp@dns-test-service-3.dns-6913.svc.cluster.local from pod dns-6913/dns-test-f9498f22-349f-47ba-996d-6b05474357ce: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-6913/pods/dns-test-f9498f22-349f-47ba-996d-6b05474357ce/proxy/results/jessie_udp@dns-test-service-3.dns-6913.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:217, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc000054090, 0x7f2d50303108, 0x18, 0xc001114360)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc000054090, 0xc0037d0450, 0x2a14500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x78de4a8, 0xc000054090, 0xc001114301, 0xc001114360, 0xc0037d0450, 0x6826620, 0xc0037d0450)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:577 +0xe5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x78de4a8, 0xc000054090, 0x12a05f200, 0x8bb2c97000, 0xc0037d0450, 0x6d6e4e0, 0x2521201)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc002222620, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc0026947c0, 0x2, 0x2, 0x702fe9b, 0x7, 0xc003514400, 0x7971668, 0xc00405cf20, 0x1, 0x70515b7, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x13c\nk8s.io/kubernetes/test/e2e/network.validateTargetedProbeOutput(0xc000d39600, 0xc003514400, 0xc0026947c0, 0x2, 0x2, 0x70515b7, 0x10)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:548 +0x376\nk8s.io/kubernetes/test/e2e/network.glob..func2.9()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:354 +0x6ed\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc00031bb00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc00031bb00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b\ntesting.tRunner(0xc00031bb00, 0x729a2d8)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.

    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6bbe4c0, 0xc0026084c0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6bbe4c0, 0xc0026084c0)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc003476300, 0x16b, 0x88abe86, 0x7d, 0xd9, 0xc000d76a80, 0x9fe)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x62ef260, 0x77956f0)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc003476300, 0x16b, 0xc003935e88, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc003476300, 0x16b, 0xc003935f70, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x70d3e4f, 0x24, 0xc0039361d0, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc000054090, 0x7f2d50303108, 0x18, 0xc001114360)
... skipping 57 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  2 12:14:27.516: Unable to read jessie_udp@dns-test-service-3.dns-6913.svc.cluster.local from pod dns-6913/dns-test-f9498f22-349f-47ba-996d-6b05474357ce: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-6913/pods/dns-test-f9498f22-349f-47ba-996d-6b05474357ce/proxy/results/jessie_udp@dns-test-service-3.dns-6913.svc.cluster.local": context deadline exceeded
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":41,"skipped":876,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:14:21.885: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
    Sep  2 12:14:25.768: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
    [It] should honor timeout [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Setting timeout (1s) shorter than webhook latency (5s)
    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
    STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Having no error when timeout is longer than webhook latency
    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Having no error when timeout is empty (defaulted to 10s in v1)
    STEP: Registering slow webhook via the AdmissionRegistration API
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:14:37.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "webhook-42" for this suite.
    STEP: Destroying namespace "webhook-42-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":42,"skipped":876,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  2 12:14:38.054: INFO: Waiting up to 5m0s for pod "downwardapi-volume-457156b8-3bde-49d0-b496-cb9749902ad0" in namespace "downward-api-5860" to be "Succeeded or Failed"
    Sep  2 12:14:38.058: INFO: Pod "downwardapi-volume-457156b8-3bde-49d0-b496-cb9749902ad0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.965939ms
    Sep  2 12:14:40.064: INFO: Pod "downwardapi-volume-457156b8-3bde-49d0-b496-cb9749902ad0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009244928s
    Sep  2 12:14:42.069: INFO: Pod "downwardapi-volume-457156b8-3bde-49d0-b496-cb9749902ad0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01436255s
    STEP: Saw pod success
    Sep  2 12:14:42.069: INFO: Pod "downwardapi-volume-457156b8-3bde-49d0-b496-cb9749902ad0" satisfied condition "Succeeded or Failed"
    Sep  2 12:14:42.072: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod downwardapi-volume-457156b8-3bde-49d0-b496-cb9749902ad0 container client-container: <nil>
    STEP: delete the pod
    Sep  2 12:14:42.089: INFO: Waiting for pod downwardapi-volume-457156b8-3bde-49d0-b496-cb9749902ad0 to disappear
    Sep  2 12:14:42.093: INFO: Pod downwardapi-volume-457156b8-3bde-49d0-b496-cb9749902ad0 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:14:42.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-5860" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":898,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:14:42.178: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-5b622f57-5590-4374-9394-acffb6745ada
    STEP: Creating a pod to test consume configMaps
    Sep  2 12:14:42.232: INFO: Waiting up to 5m0s for pod "pod-configmaps-2971d236-6c64-4f16-aaf9-2c2f9a8b1cb6" in namespace "configmap-4215" to be "Succeeded or Failed"
    Sep  2 12:14:42.236: INFO: Pod "pod-configmaps-2971d236-6c64-4f16-aaf9-2c2f9a8b1cb6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33521ms
    Sep  2 12:14:44.241: INFO: Pod "pod-configmaps-2971d236-6c64-4f16-aaf9-2c2f9a8b1cb6": Phase="Running", Reason="", readiness=false. Elapsed: 2.009291299s
    Sep  2 12:14:46.246: INFO: Pod "pod-configmaps-2971d236-6c64-4f16-aaf9-2c2f9a8b1cb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014324043s
    STEP: Saw pod success
    Sep  2 12:14:46.246: INFO: Pod "pod-configmaps-2971d236-6c64-4f16-aaf9-2c2f9a8b1cb6" satisfied condition "Succeeded or Failed"
    Sep  2 12:14:46.251: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod pod-configmaps-2971d236-6c64-4f16-aaf9-2c2f9a8b1cb6 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  2 12:14:46.274: INFO: Waiting for pod pod-configmaps-2971d236-6c64-4f16-aaf9-2c2f9a8b1cb6 to disappear
    Sep  2 12:14:46.280: INFO: Pod pod-configmaps-2971d236-6c64-4f16-aaf9-2c2f9a8b1cb6 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:14:46.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-4215" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":933,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Ingress API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:14:46.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "ingress-4810" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":45,"skipped":939,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:15:06.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-7585" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":46,"skipped":942,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:15:06.642: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-3981-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":47,"skipped":942,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    STEP: Destroying namespace "webhook-9761-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":48,"skipped":952,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Discovery
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 85 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:15:17.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "discovery-6040" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":49,"skipped":962,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 34 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:15:23.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-7997" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":50,"skipped":985,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:15:23.879: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override all
    Sep  2 12:15:23.916: INFO: Waiting up to 5m0s for pod "client-containers-e66caa62-e728-40ce-b060-ddd8be1a2091" in namespace "containers-1138" to be "Succeeded or Failed"
    Sep  2 12:15:23.919: INFO: Pod "client-containers-e66caa62-e728-40ce-b060-ddd8be1a2091": Phase="Pending", Reason="", readiness=false. Elapsed: 3.057661ms
    Sep  2 12:15:25.924: INFO: Pod "client-containers-e66caa62-e728-40ce-b060-ddd8be1a2091": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008051426s
    Sep  2 12:15:27.928: INFO: Pod "client-containers-e66caa62-e728-40ce-b060-ddd8be1a2091": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012445713s
    STEP: Saw pod success
    Sep  2 12:15:27.929: INFO: Pod "client-containers-e66caa62-e728-40ce-b060-ddd8be1a2091" satisfied condition "Succeeded or Failed"
    Sep  2 12:15:27.932: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod client-containers-e66caa62-e728-40ce-b060-ddd8be1a2091 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  2 12:15:27.953: INFO: Waiting for pod client-containers-e66caa62-e728-40ce-b060-ddd8be1a2091 to disappear
    Sep  2 12:15:27.956: INFO: Pod client-containers-e66caa62-e728-40ce-b060-ddd8be1a2091 no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:15:27.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-1138" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":51,"skipped":990,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "webhook-4116-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":52,"skipped":1005,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:15:31.581: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a container's command [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in container's command
    Sep  2 12:15:31.633: INFO: Waiting up to 5m0s for pod "var-expansion-aa50e47b-84db-4461-9f7b-21e617e91801" in namespace "var-expansion-1394" to be "Succeeded or Failed"
    Sep  2 12:15:31.639: INFO: Pod "var-expansion-aa50e47b-84db-4461-9f7b-21e617e91801": Phase="Pending", Reason="", readiness=false. Elapsed: 5.590352ms
    Sep  2 12:15:33.644: INFO: Pod "var-expansion-aa50e47b-84db-4461-9f7b-21e617e91801": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010727044s
    Sep  2 12:15:35.649: INFO: Pod "var-expansion-aa50e47b-84db-4461-9f7b-21e617e91801": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015997372s
    STEP: Saw pod success
    Sep  2 12:15:35.649: INFO: Pod "var-expansion-aa50e47b-84db-4461-9f7b-21e617e91801" satisfied condition "Succeeded or Failed"
    Sep  2 12:15:35.652: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod var-expansion-aa50e47b-84db-4461-9f7b-21e617e91801 container dapi-container: <nil>
    STEP: delete the pod
    Sep  2 12:15:35.672: INFO: Waiting for pod var-expansion-aa50e47b-84db-4461-9f7b-21e617e91801 to disappear
    Sep  2 12:15:35.675: INFO: Pod var-expansion-aa50e47b-84db-4461-9f7b-21e617e91801 no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:15:35.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-1394" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":1021,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:15:48.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-7907" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":54,"skipped":1023,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:15:48.817: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-d410c640-8e0b-4e9a-8644-6650a362fa9d
    STEP: Creating a pod to test consume secrets
    Sep  2 12:15:48.876: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dc3806bb-8ed8-488c-92bb-337a10fd1287" in namespace "projected-5458" to be "Succeeded or Failed"
    Sep  2 12:15:48.882: INFO: Pod "pod-projected-secrets-dc3806bb-8ed8-488c-92bb-337a10fd1287": Phase="Pending", Reason="", readiness=false. Elapsed: 5.532609ms
    Sep  2 12:15:50.886: INFO: Pod "pod-projected-secrets-dc3806bb-8ed8-488c-92bb-337a10fd1287": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009546306s
    Sep  2 12:15:52.891: INFO: Pod "pod-projected-secrets-dc3806bb-8ed8-488c-92bb-337a10fd1287": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014405986s
    STEP: Saw pod success
    Sep  2 12:15:52.891: INFO: Pod "pod-projected-secrets-dc3806bb-8ed8-488c-92bb-337a10fd1287" satisfied condition "Succeeded or Failed"
    Sep  2 12:15:52.894: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod pod-projected-secrets-dc3806bb-8ed8-488c-92bb-337a10fd1287 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  2 12:15:52.911: INFO: Waiting for pod pod-projected-secrets-dc3806bb-8ed8-488c-92bb-337a10fd1287 to disappear
    Sep  2 12:15:52.914: INFO: Pod pod-projected-secrets-dc3806bb-8ed8-488c-92bb-337a10fd1287 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:15:52.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5458" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":55,"skipped":1028,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:15:52.966: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-map-c18a4ca4-e92d-465d-949a-8060f0bdaf12
    STEP: Creating a pod to test consume secrets
    Sep  2 12:15:53.023: INFO: Waiting up to 5m0s for pod "pod-secrets-4b0b5712-fdf0-4243-91c8-f5a1e8a63cbe" in namespace "secrets-839" to be "Succeeded or Failed"
    Sep  2 12:15:53.029: INFO: Pod "pod-secrets-4b0b5712-fdf0-4243-91c8-f5a1e8a63cbe": Phase="Pending", Reason="", readiness=false. Elapsed: 5.624325ms
    Sep  2 12:15:55.033: INFO: Pod "pod-secrets-4b0b5712-fdf0-4243-91c8-f5a1e8a63cbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010172136s
    Sep  2 12:15:57.037: INFO: Pod "pod-secrets-4b0b5712-fdf0-4243-91c8-f5a1e8a63cbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014193483s
    STEP: Saw pod success
    Sep  2 12:15:57.037: INFO: Pod "pod-secrets-4b0b5712-fdf0-4243-91c8-f5a1e8a63cbe" satisfied condition "Succeeded or Failed"
    Sep  2 12:15:57.040: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod pod-secrets-4b0b5712-fdf0-4243-91c8-f5a1e8a63cbe container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  2 12:15:57.054: INFO: Waiting for pod pod-secrets-4b0b5712-fdf0-4243-91c8-f5a1e8a63cbe to disappear
    Sep  2 12:15:57.059: INFO: Pod pod-secrets-4b0b5712-fdf0-4243-91c8-f5a1e8a63cbe no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:15:57.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-839" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":56,"skipped":1051,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] RuntimeClass
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:15:57.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "runtimeclass-2397" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] RuntimeClass  should support RuntimeClasses API operations [Conformance]","total":-1,"completed":57,"skipped":1102,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:16:02.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-3843" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":58,"skipped":1111,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:16:02.437: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  2 12:16:02.479: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-764df467-8139-4d6e-9a72-c98ef0319817" in namespace "security-context-test-3702" to be "Succeeded or Failed"
    Sep  2 12:16:02.482: INFO: Pod "busybox-privileged-false-764df467-8139-4d6e-9a72-c98ef0319817": Phase="Pending", Reason="", readiness=false. Elapsed: 2.668629ms
    Sep  2 12:16:04.486: INFO: Pod "busybox-privileged-false-764df467-8139-4d6e-9a72-c98ef0319817": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007290622s
    Sep  2 12:16:06.491: INFO: Pod "busybox-privileged-false-764df467-8139-4d6e-9a72-c98ef0319817": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011747019s
    Sep  2 12:16:06.491: INFO: Pod "busybox-privileged-false-764df467-8139-4d6e-9a72-c98ef0319817" satisfied condition "Succeeded or Failed"
    Sep  2 12:16:06.497: INFO: Got logs for pod "busybox-privileged-false-764df467-8139-4d6e-9a72-c98ef0319817": "ip: RTNETLINK answers: Operation not permitted\n"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:16:06.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-3702" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":59,"skipped":1143,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:16:10.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-6650" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":60,"skipped":1149,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 47 lines ...
    STEP: Destroying namespace "services-6300" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":61,"skipped":1159,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
    • [SLOW TEST:300.096 seconds]
    [sig-apps] CronJob
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      should not schedule jobs when suspended [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":62,"skipped":1097,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:18:45.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-1117" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":63,"skipped":1115,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    • [SLOW TEST:152.688 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should have monotonically increasing restart count [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":62,"skipped":1171,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 61 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:19:06.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-4639" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":63,"skipped":1185,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's cpu request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  2 12:19:07.034: INFO: Waiting up to 5m0s for pod "downwardapi-volume-631ecb70-cb19-4018-81be-52c3627f23e8" in namespace "projected-1855" to be "Succeeded or Failed"
    Sep  2 12:19:07.041: INFO: Pod "downwardapi-volume-631ecb70-cb19-4018-81be-52c3627f23e8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.492569ms
    Sep  2 12:19:09.049: INFO: Pod "downwardapi-volume-631ecb70-cb19-4018-81be-52c3627f23e8": Phase="Running", Reason="", readiness=true. Elapsed: 2.015340347s
    Sep  2 12:19:11.060: INFO: Pod "downwardapi-volume-631ecb70-cb19-4018-81be-52c3627f23e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025658428s
    STEP: Saw pod success
    Sep  2 12:19:11.060: INFO: Pod "downwardapi-volume-631ecb70-cb19-4018-81be-52c3627f23e8" satisfied condition "Succeeded or Failed"
    Sep  2 12:19:11.067: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod downwardapi-volume-631ecb70-cb19-4018-81be-52c3627f23e8 container client-container: <nil>
    STEP: delete the pod
    Sep  2 12:19:11.110: INFO: Waiting for pod downwardapi-volume-631ecb70-cb19-4018-81be-52c3627f23e8 to disappear
    Sep  2 12:19:11.116: INFO: Pod downwardapi-volume-631ecb70-cb19-4018-81be-52c3627f23e8 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:19:11.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-1855" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":64,"skipped":1232,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:19:11.163: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep  2 12:19:11.249: INFO: Waiting up to 5m0s for pod "downward-api-359bfa2b-4fba-45f4-a428-f2c5e86b5a7f" in namespace "downward-api-8087" to be "Succeeded or Failed"
    Sep  2 12:19:11.258: INFO: Pod "downward-api-359bfa2b-4fba-45f4-a428-f2c5e86b5a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.402878ms
    Sep  2 12:19:13.267: INFO: Pod "downward-api-359bfa2b-4fba-45f4-a428-f2c5e86b5a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018428227s
    Sep  2 12:19:15.274: INFO: Pod "downward-api-359bfa2b-4fba-45f4-a428-f2c5e86b5a7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025468502s
    STEP: Saw pod success
    Sep  2 12:19:15.274: INFO: Pod "downward-api-359bfa2b-4fba-45f4-a428-f2c5e86b5a7f" satisfied condition "Succeeded or Failed"
    Sep  2 12:19:15.279: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod downward-api-359bfa2b-4fba-45f4-a428-f2c5e86b5a7f container dapi-container: <nil>
    STEP: delete the pod
    Sep  2 12:19:15.306: INFO: Waiting for pod downward-api-359bfa2b-4fba-45f4-a428-f2c5e86b5a7f to disappear
    Sep  2 12:19:15.310: INFO: Pod downward-api-359bfa2b-4fba-45f4-a428-f2c5e86b5a7f no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:19:15.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-8087" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":65,"skipped":1239,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:19:25.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-51" for this suite.
    
    •
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":82,"skipped":1674,"failed":5,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}

    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:14:27.558: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 6 lines ...
    
    STEP: creating a pod to probe DNS
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  2 12:18:02.382: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-2012.svc.cluster.local from pod dns-2012/dns-test-91942532-c40d-450b-9879-743293c9f07f: the server is currently unable to handle the request (get pods dns-test-91942532-c40d-450b-9879-743293c9f07f)
    Sep  2 12:19:29.624: FAIL: Unable to read jessie_udp@dns-test-service-3.dns-2012.svc.cluster.local from pod dns-2012/dns-test-91942532-c40d-450b-9879-743293c9f07f: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-2012/pods/dns-test-91942532-c40d-450b-9879-743293c9f07f/proxy/results/jessie_udp@dns-test-service-3.dns-2012.svc.cluster.local": context deadline exceeded
    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc000054090, 0x7f2d50303a68, 0x18, 0xc0019bd050)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc000054090, 0xc002307b80, 0x2a14500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f
... skipping 15 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b
    testing.tRunner(0xc00031bb00, 0x729a2d8)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0902 12:19:29.626000      19 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep  2 12:19:29.625: Unable to read jessie_udp@dns-test-service-3.dns-2012.svc.cluster.local from pod dns-2012/dns-test-91942532-c40d-450b-9879-743293c9f07f: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-2012/pods/dns-test-91942532-c40d-450b-9879-743293c9f07f/proxy/results/jessie_udp@dns-test-service-3.dns-2012.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:217, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc000054090, 0x7f2d50303a68, 0x18, 0xc0019bd050)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc000054090, 0xc002307b80, 0x2a14500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x78de4a8, 0xc000054090, 0xc0019bd001, 0xc0019bd050, 0xc002307b80, 0x6826620, 0xc002307b80)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:577 +0xe5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x78de4a8, 0xc000054090, 0x12a05f200, 0x8bb2c97000, 0xc002307b80, 0x6d6e4e0, 0x2521201)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc003bd7b20, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc003edc0a0, 0x2, 0x2, 0x702fe9b, 0x7, 0xc000501800, 0x7971668, 0xc001b72b00, 0x1, 0x70515b7, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x13c\nk8s.io/kubernetes/test/e2e/network.validateTargetedProbeOutput(0xc000d39600, 0xc000501800, 0xc003edc0a0, 0x2, 0x2, 0x70515b7, 0x10)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:548 +0x376\nk8s.io/kubernetes/test/e2e/network.glob..func2.9()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:354 +0x6ed\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc00031bb00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc00031bb00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b\ntesting.tRunner(0xc00031bb00, 0x729a2d8)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.

    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6bbe4c0, 0xc0049b7540)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6bbe4c0, 0xc0049b7540)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc0006d6180, 0x16b, 0x88abe86, 0x7d, 0xd9, 0xc0005bca80, 0x9fe)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x62ef260, 0x77956f0)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc0006d6180, 0x16b, 0xc003935e88, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc0006d6180, 0x16b, 0xc003935f70, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x70d3e4f, 0x24, 0xc0039361d0, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc000054090, 0x7f2d50303a68, 0x18, 0xc0019bd050)
... skipping 57 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  2 12:19:29.625: Unable to read jessie_udp@dns-test-service-3.dns-2012.svc.cluster.local from pod dns-2012/dns-test-91942532-c40d-450b-9879-743293c9f07f: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-2012/pods/dns-test-91942532-c40d-450b-9879-743293c9f07f/proxy/results/jessie_udp@dns-test-service-3.dns-2012.svc.cluster.local": context deadline exceeded
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":82,"skipped":1674,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}

    
    SSSS
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":66,"skipped":1255,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:19:25.510: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-890b28ff-936e-4a66-a926-d0f484085d67
    STEP: Creating a pod to test consume configMaps
    Sep  2 12:19:25.607: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f24eca1d-2f73-4c2b-adde-3f6c408b8c55" in namespace "projected-6660" to be "Succeeded or Failed"
    Sep  2 12:19:25.614: INFO: Pod "pod-projected-configmaps-f24eca1d-2f73-4c2b-adde-3f6c408b8c55": Phase="Pending", Reason="", readiness=false. Elapsed: 7.550076ms
    Sep  2 12:19:27.622: INFO: Pod "pod-projected-configmaps-f24eca1d-2f73-4c2b-adde-3f6c408b8c55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015722144s
    Sep  2 12:19:29.631: INFO: Pod "pod-projected-configmaps-f24eca1d-2f73-4c2b-adde-3f6c408b8c55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024455923s
    STEP: Saw pod success
    Sep  2 12:19:29.631: INFO: Pod "pod-projected-configmaps-f24eca1d-2f73-4c2b-adde-3f6c408b8c55" satisfied condition "Succeeded or Failed"
    Sep  2 12:19:29.641: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5 pod pod-projected-configmaps-f24eca1d-2f73-4c2b-adde-3f6c408b8c55 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  2 12:19:29.707: INFO: Waiting for pod pod-projected-configmaps-f24eca1d-2f73-4c2b-adde-3f6c408b8c55 to disappear
    Sep  2 12:19:29.715: INFO: Pod pod-projected-configmaps-f24eca1d-2f73-4c2b-adde-3f6c408b8c55 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:19:29.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-6660" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":67,"skipped":1255,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:19:33.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-1653" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":68,"skipped":1312,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:19:33.788: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-5c2ea31f-0828-466b-852a-005e05a0300e
    STEP: Creating a pod to test consume configMaps
    Sep  2 12:19:33.865: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ded72505-5d45-42cb-92ad-4d1f74048f83" in namespace "projected-7186" to be "Succeeded or Failed"
    Sep  2 12:19:33.870: INFO: Pod "pod-projected-configmaps-ded72505-5d45-42cb-92ad-4d1f74048f83": Phase="Pending", Reason="", readiness=false. Elapsed: 5.768605ms
    Sep  2 12:19:35.879: INFO: Pod "pod-projected-configmaps-ded72505-5d45-42cb-92ad-4d1f74048f83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014016749s
    Sep  2 12:19:37.886: INFO: Pod "pod-projected-configmaps-ded72505-5d45-42cb-92ad-4d1f74048f83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021309505s
    STEP: Saw pod success
    Sep  2 12:19:37.886: INFO: Pod "pod-projected-configmaps-ded72505-5d45-42cb-92ad-4d1f74048f83" satisfied condition "Succeeded or Failed"
    Sep  2 12:19:37.890: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod pod-projected-configmaps-ded72505-5d45-42cb-92ad-4d1f74048f83 container projected-configmap-volume-test: <nil>
    STEP: delete the pod
    Sep  2 12:19:37.914: INFO: Waiting for pod pod-projected-configmaps-ded72505-5d45-42cb-92ad-4d1f74048f83 to disappear
    Sep  2 12:19:37.918: INFO: Pod pod-projected-configmaps-ded72505-5d45-42cb-92ad-4d1f74048f83 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:19:37.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7186" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":69,"skipped":1322,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:19:38.045: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
    Sep  2 12:19:38.123: INFO: Waiting up to 5m0s for pod "security-context-59545007-8b07-4c1b-834f-80782b9d3843" in namespace "security-context-6554" to be "Succeeded or Failed"
    Sep  2 12:19:38.127: INFO: Pod "security-context-59545007-8b07-4c1b-834f-80782b9d3843": Phase="Pending", Reason="", readiness=false. Elapsed: 4.472968ms
    Sep  2 12:19:40.134: INFO: Pod "security-context-59545007-8b07-4c1b-834f-80782b9d3843": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011593818s
    Sep  2 12:19:42.142: INFO: Pod "security-context-59545007-8b07-4c1b-834f-80782b9d3843": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018933405s
    STEP: Saw pod success
    Sep  2 12:19:42.142: INFO: Pod "security-context-59545007-8b07-4c1b-834f-80782b9d3843" satisfied condition "Succeeded or Failed"
    Sep  2 12:19:42.148: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod security-context-59545007-8b07-4c1b-834f-80782b9d3843 container test-container: <nil>
    STEP: delete the pod
    Sep  2 12:19:42.178: INFO: Waiting for pod security-context-59545007-8b07-4c1b-834f-80782b9d3843 to disappear
    Sep  2 12:19:42.184: INFO: Pod security-context-59545007-8b07-4c1b-834f-80782b9d3843 no longer exists
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:19:42.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-6554" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":70,"skipped":1359,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:19:42.209: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
    STEP: Destroying namespace "webhook-9637-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":71,"skipped":1359,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:19:48.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-watch-5755" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":64,"skipped":1144,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 31 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:19:54.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-893" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":65,"skipped":1252,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:19:55.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-9456" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":66,"skipped":1260,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] KubeletManagedEtcHosts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 48 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:19:56.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "e2e-kubelet-etc-hosts-1221" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":72,"skipped":1396,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 36 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:19:57.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-7451" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":73,"skipped":1399,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:19:55.589: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-655a8fd5-1337-46ac-91e0-0f7b79bdcb5c
    STEP: Creating a pod to test consume secrets
    Sep  2 12:19:55.686: INFO: Waiting up to 5m0s for pod "pod-secrets-054a52ef-c673-4bfb-9857-1ede41892eb9" in namespace "secrets-4888" to be "Succeeded or Failed"
    Sep  2 12:19:55.692: INFO: Pod "pod-secrets-054a52ef-c673-4bfb-9857-1ede41892eb9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187605ms
    Sep  2 12:19:57.699: INFO: Pod "pod-secrets-054a52ef-c673-4bfb-9857-1ede41892eb9": Phase="Running", Reason="", readiness=true. Elapsed: 2.011900045s
    Sep  2 12:19:59.706: INFO: Pod "pod-secrets-054a52ef-c673-4bfb-9857-1ede41892eb9": Phase="Running", Reason="", readiness=false. Elapsed: 4.018498638s
    Sep  2 12:20:01.726: INFO: Pod "pod-secrets-054a52ef-c673-4bfb-9857-1ede41892eb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038453429s
    STEP: Saw pod success
    Sep  2 12:20:01.726: INFO: Pod "pod-secrets-054a52ef-c673-4bfb-9857-1ede41892eb9" satisfied condition "Succeeded or Failed"
    Sep  2 12:20:01.747: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5 pod pod-secrets-054a52ef-c673-4bfb-9857-1ede41892eb9 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  2 12:20:01.810: INFO: Waiting for pod pod-secrets-054a52ef-c673-4bfb-9857-1ede41892eb9 to disappear
    Sep  2 12:20:01.829: INFO: Pod pod-secrets-054a52ef-c673-4bfb-9857-1ede41892eb9 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:20:01.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-4888" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":67,"skipped":1276,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:19:57.764: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir volume type on tmpfs
    Sep  2 12:19:57.851: INFO: Waiting up to 5m0s for pod "pod-a60febdb-06df-460a-b263-c02f4d2c0734" in namespace "emptydir-6265" to be "Succeeded or Failed"
    Sep  2 12:19:57.859: INFO: Pod "pod-a60febdb-06df-460a-b263-c02f4d2c0734": Phase="Pending", Reason="", readiness=false. Elapsed: 7.970543ms
    Sep  2 12:19:59.866: INFO: Pod "pod-a60febdb-06df-460a-b263-c02f4d2c0734": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015536525s
    Sep  2 12:20:01.874: INFO: Pod "pod-a60febdb-06df-460a-b263-c02f4d2c0734": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023514214s
    STEP: Saw pod success
    Sep  2 12:20:01.875: INFO: Pod "pod-a60febdb-06df-460a-b263-c02f4d2c0734" satisfied condition "Succeeded or Failed"
    Sep  2 12:20:01.887: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod pod-a60febdb-06df-460a-b263-c02f4d2c0734 container test-container: <nil>
    STEP: delete the pod
    Sep  2 12:20:02.008: INFO: Waiting for pod pod-a60febdb-06df-460a-b263-c02f4d2c0734 to disappear
    Sep  2 12:20:02.019: INFO: Pod pod-a60febdb-06df-460a-b263-c02f4d2c0734 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:20:02.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-6265" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":74,"skipped":1407,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods Extended
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:20:02.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-875" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":75,"skipped":1424,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:20:06.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-6013" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":68,"skipped":1296,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:20:02.434: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on tmpfs
    Sep  2 12:20:02.553: INFO: Waiting up to 5m0s for pod "pod-63712707-08ff-4e40-a1ab-f9c74dbd0432" in namespace "emptydir-7414" to be "Succeeded or Failed"
    Sep  2 12:20:02.560: INFO: Pod "pod-63712707-08ff-4e40-a1ab-f9c74dbd0432": Phase="Pending", Reason="", readiness=false. Elapsed: 7.378982ms
    Sep  2 12:20:04.567: INFO: Pod "pod-63712707-08ff-4e40-a1ab-f9c74dbd0432": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013918955s
    Sep  2 12:20:06.574: INFO: Pod "pod-63712707-08ff-4e40-a1ab-f9c74dbd0432": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020999698s
    STEP: Saw pod success
    Sep  2 12:20:06.574: INFO: Pod "pod-63712707-08ff-4e40-a1ab-f9c74dbd0432" satisfied condition "Succeeded or Failed"
    Sep  2 12:20:06.578: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod pod-63712707-08ff-4e40-a1ab-f9c74dbd0432 container test-container: <nil>
    STEP: delete the pod
    Sep  2 12:20:06.604: INFO: Waiting for pod pod-63712707-08ff-4e40-a1ab-f9c74dbd0432 to disappear
    Sep  2 12:20:06.611: INFO: Pod pod-63712707-08ff-4e40-a1ab-f9c74dbd0432 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:20:06.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-7414" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":76,"skipped":1461,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:20:06.256: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-e8682795-8804-47ba-be30-2f3d67907157
    STEP: Creating a pod to test consume secrets
    Sep  2 12:20:06.347: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-082f206b-b19f-4951-8d30-79145cb33556" in namespace "projected-5149" to be "Succeeded or Failed"
    Sep  2 12:20:06.352: INFO: Pod "pod-projected-secrets-082f206b-b19f-4951-8d30-79145cb33556": Phase="Pending", Reason="", readiness=false. Elapsed: 4.905301ms
    Sep  2 12:20:08.361: INFO: Pod "pod-projected-secrets-082f206b-b19f-4951-8d30-79145cb33556": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014168681s
    Sep  2 12:20:10.368: INFO: Pod "pod-projected-secrets-082f206b-b19f-4951-8d30-79145cb33556": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021135685s
    STEP: Saw pod success
    Sep  2 12:20:10.368: INFO: Pod "pod-projected-secrets-082f206b-b19f-4951-8d30-79145cb33556" satisfied condition "Succeeded or Failed"
    Sep  2 12:20:10.376: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-cznwre pod pod-projected-secrets-082f206b-b19f-4951-8d30-79145cb33556 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  2 12:20:10.425: INFO: Waiting for pod pod-projected-secrets-082f206b-b19f-4951-8d30-79145cb33556 to disappear
    Sep  2 12:20:10.429: INFO: Pod pod-projected-secrets-082f206b-b19f-4951-8d30-79145cb33556 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:20:10.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5149" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":69,"skipped":1317,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:20:10.534: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-map-ea6afb74-5b4e-4a8f-9d46-b26627f261ad
    STEP: Creating a pod to test consume secrets
    Sep  2 12:20:10.607: INFO: Waiting up to 5m0s for pod "pod-secrets-320b0210-51a3-47a2-b5d9-f4ea9de00d75" in namespace "secrets-4784" to be "Succeeded or Failed"
    Sep  2 12:20:10.613: INFO: Pod "pod-secrets-320b0210-51a3-47a2-b5d9-f4ea9de00d75": Phase="Pending", Reason="", readiness=false. Elapsed: 5.824118ms
    Sep  2 12:20:12.621: INFO: Pod "pod-secrets-320b0210-51a3-47a2-b5d9-f4ea9de00d75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013893527s
    Sep  2 12:20:14.627: INFO: Pod "pod-secrets-320b0210-51a3-47a2-b5d9-f4ea9de00d75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020153693s
    STEP: Saw pod success
    Sep  2 12:20:14.628: INFO: Pod "pod-secrets-320b0210-51a3-47a2-b5d9-f4ea9de00d75" satisfied condition "Succeeded or Failed"
    Sep  2 12:20:14.635: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod pod-secrets-320b0210-51a3-47a2-b5d9-f4ea9de00d75 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  2 12:20:14.675: INFO: Waiting for pod pod-secrets-320b0210-51a3-47a2-b5d9-f4ea9de00d75 to disappear
    Sep  2 12:20:14.686: INFO: Pod pod-secrets-320b0210-51a3-47a2-b5d9-f4ea9de00d75 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:20:14.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-4784" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":70,"skipped":1340,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
    STEP: updating the pod
    Sep  2 12:20:17.392: INFO: Successfully updated pod "pod-update-activedeadlineseconds-4e27e89d-f20d-4442-a557-df9c5be9b00f"
    Sep  2 12:20:17.392: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-4e27e89d-f20d-4442-a557-df9c5be9b00f" in namespace "pods-3984" to be "terminated due to deadline exceeded"
    Sep  2 12:20:17.399: INFO: Pod "pod-update-activedeadlineseconds-4e27e89d-f20d-4442-a557-df9c5be9b00f": Phase="Running", Reason="", readiness=true. Elapsed: 7.030801ms
    Sep  2 12:20:19.409: INFO: Pod "pod-update-activedeadlineseconds-4e27e89d-f20d-4442-a557-df9c5be9b00f": Phase="Running", Reason="", readiness=true. Elapsed: 2.017581075s
    Sep  2 12:20:21.417: INFO: Pod "pod-update-activedeadlineseconds-4e27e89d-f20d-4442-a557-df9c5be9b00f": Phase="Running", Reason="", readiness=false. Elapsed: 4.025183828s
    Sep  2 12:20:23.424: INFO: Pod "pod-update-activedeadlineseconds-4e27e89d-f20d-4442-a557-df9c5be9b00f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 6.032048133s
    Sep  2 12:20:23.424: INFO: Pod "pod-update-activedeadlineseconds-4e27e89d-f20d-4442-a557-df9c5be9b00f" satisfied condition "terminated due to deadline exceeded"
    [AfterEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:20:23.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-3984" for this suite.
    
... skipping 20 lines ...
    STEP: Registering the crd webhook via the AdmissionRegistration API
    Sep  2 12:19:43.477: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:19:53.604: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:20:03.713: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:20:13.807: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:20:23.825: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:20:23.826: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000248290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should deny crd creation [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  2 12:20:23.826: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000248290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 16 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:20:28.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-2517" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":77,"skipped":1480,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
    
    SSS
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":71,"skipped":1356,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:20:23.455: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  2 12:20:23.539: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-578aa832-04de-4dd7-b6ec-5f1be546f6a4" in namespace "security-context-test-2234" to be "Succeeded or Failed"
    Sep  2 12:20:23.552: INFO: Pod "alpine-nnp-false-578aa832-04de-4dd7-b6ec-5f1be546f6a4": Phase="Pending", Reason="", readiness=false. Elapsed: 13.076372ms
    Sep  2 12:20:25.569: INFO: Pod "alpine-nnp-false-578aa832-04de-4dd7-b6ec-5f1be546f6a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029832746s
    Sep  2 12:20:27.580: INFO: Pod "alpine-nnp-false-578aa832-04de-4dd7-b6ec-5f1be546f6a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041259146s
    Sep  2 12:20:29.589: INFO: Pod "alpine-nnp-false-578aa832-04de-4dd7-b6ec-5f1be546f6a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.050378201s
    Sep  2 12:20:29.589: INFO: Pod "alpine-nnp-false-578aa832-04de-4dd7-b6ec-5f1be546f6a4" satisfied condition "Succeeded or Failed"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:20:29.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-2234" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":72,"skipped":1356,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-network] IngressClass API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:20:29.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "ingressclass-795" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":-1,"completed":73,"skipped":1361,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-6422-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":78,"skipped":1483,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:20:29.885: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  2 12:20:31.968: INFO: Deleting pod "var-expansion-a1dacdef-a444-4ed7-abfc-bf98e4e41a60" in namespace "var-expansion-8152"
    Sep  2 12:20:31.981: INFO: Wait up to 5m0s for pod "var-expansion-a1dacdef-a444-4ed7-abfc-bf98e4e41a60" to be fully deleted
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:20:33.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-8152" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":74,"skipped":1370,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:20:32.963: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override command
    Sep  2 12:20:33.085: INFO: Waiting up to 5m0s for pod "client-containers-16b6cd71-6963-43f4-9882-d20ba41e3718" in namespace "containers-2959" to be "Succeeded or Failed"
    Sep  2 12:20:33.092: INFO: Pod "client-containers-16b6cd71-6963-43f4-9882-d20ba41e3718": Phase="Pending", Reason="", readiness=false. Elapsed: 7.12504ms
    Sep  2 12:20:35.106: INFO: Pod "client-containers-16b6cd71-6963-43f4-9882-d20ba41e3718": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021131953s
    Sep  2 12:20:37.117: INFO: Pod "client-containers-16b6cd71-6963-43f4-9882-d20ba41e3718": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031447995s
    STEP: Saw pod success
    Sep  2 12:20:37.117: INFO: Pod "client-containers-16b6cd71-6963-43f4-9882-d20ba41e3718" satisfied condition "Succeeded or Failed"
    Sep  2 12:20:37.123: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod client-containers-16b6cd71-6963-43f4-9882-d20ba41e3718 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  2 12:20:37.157: INFO: Waiting for pod client-containers-16b6cd71-6963-43f4-9882-d20ba41e3718 to disappear
    Sep  2 12:20:37.164: INFO: Pod client-containers-16b6cd71-6963-43f4-9882-d20ba41e3718 no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:20:37.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-2959" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":79,"skipped":1503,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:20:34.074: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-4f1764ec-d15c-422b-af9c-7ad1593be5b8
    STEP: Creating a pod to test consume configMaps
    Sep  2 12:20:34.202: INFO: Waiting up to 5m0s for pod "pod-configmaps-88f43b44-b0d3-4db5-a3d5-7fcd124213a3" in namespace "configmap-2121" to be "Succeeded or Failed"
    Sep  2 12:20:34.220: INFO: Pod "pod-configmaps-88f43b44-b0d3-4db5-a3d5-7fcd124213a3": Phase="Pending", Reason="", readiness=false. Elapsed: 17.555449ms
    Sep  2 12:20:36.228: INFO: Pod "pod-configmaps-88f43b44-b0d3-4db5-a3d5-7fcd124213a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026136942s
    Sep  2 12:20:38.234: INFO: Pod "pod-configmaps-88f43b44-b0d3-4db5-a3d5-7fcd124213a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03241928s
    STEP: Saw pod success
    Sep  2 12:20:38.235: INFO: Pod "pod-configmaps-88f43b44-b0d3-4db5-a3d5-7fcd124213a3" satisfied condition "Succeeded or Failed"
    Sep  2 12:20:38.241: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-cznwre pod pod-configmaps-88f43b44-b0d3-4db5-a3d5-7fcd124213a3 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  2 12:20:38.299: INFO: Waiting for pod pod-configmaps-88f43b44-b0d3-4db5-a3d5-7fcd124213a3 to disappear
    Sep  2 12:20:38.305: INFO: Pod pod-configmaps-88f43b44-b0d3-4db5-a3d5-7fcd124213a3 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:20:38.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-2121" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":75,"skipped":1375,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:20:37.393: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-207b6b84-bd97-4373-84da-0aba62c196a1
    STEP: Creating a pod to test consume secrets
    Sep  2 12:20:37.490: INFO: Waiting up to 5m0s for pod "pod-secrets-44383c7d-485e-4027-923b-5ff845939d41" in namespace "secrets-4509" to be "Succeeded or Failed"
    Sep  2 12:20:37.498: INFO: Pod "pod-secrets-44383c7d-485e-4027-923b-5ff845939d41": Phase="Pending", Reason="", readiness=false. Elapsed: 7.749387ms
    Sep  2 12:20:39.505: INFO: Pod "pod-secrets-44383c7d-485e-4027-923b-5ff845939d41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014590454s
    Sep  2 12:20:41.513: INFO: Pod "pod-secrets-44383c7d-485e-4027-923b-5ff845939d41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022606428s
    STEP: Saw pod success
    Sep  2 12:20:41.513: INFO: Pod "pod-secrets-44383c7d-485e-4027-923b-5ff845939d41" satisfied condition "Succeeded or Failed"
    Sep  2 12:20:41.519: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod pod-secrets-44383c7d-485e-4027-923b-5ff845939d41 container secret-env-test: <nil>
    STEP: delete the pod
    Sep  2 12:20:41.557: INFO: Waiting for pod pod-secrets-44383c7d-485e-4027-923b-5ff845939d41 to disappear
    Sep  2 12:20:41.561: INFO: Pod pod-secrets-44383c7d-485e-4027-923b-5ff845939d41 no longer exists
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:20:41.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-4509" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":80,"skipped":1569,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:20:42.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-1045" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":76,"skipped":1380,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:20:41.637: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on tmpfs
    Sep  2 12:20:41.729: INFO: Waiting up to 5m0s for pod "pod-0da832b4-7e16-4009-972e-b34a7260f7e2" in namespace "emptydir-4188" to be "Succeeded or Failed"
    Sep  2 12:20:41.736: INFO: Pod "pod-0da832b4-7e16-4009-972e-b34a7260f7e2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.867361ms
    Sep  2 12:20:43.746: INFO: Pod "pod-0da832b4-7e16-4009-972e-b34a7260f7e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017192299s
    Sep  2 12:20:45.754: INFO: Pod "pod-0da832b4-7e16-4009-972e-b34a7260f7e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025685067s
    STEP: Saw pod success
    Sep  2 12:20:45.754: INFO: Pod "pod-0da832b4-7e16-4009-972e-b34a7260f7e2" satisfied condition "Succeeded or Failed"
    Sep  2 12:20:45.757: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-cznwre pod pod-0da832b4-7e16-4009-972e-b34a7260f7e2 container test-container: <nil>
    STEP: delete the pod
    Sep  2 12:20:45.781: INFO: Waiting for pod pod-0da832b4-7e16-4009-972e-b34a7260f7e2 to disappear
    Sep  2 12:20:45.785: INFO: Pod pod-0da832b4-7e16-4009-972e-b34a7260f7e2 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:20:45.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-4188" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":81,"skipped":1588,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:20:42.701: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-9a45c892-ec6b-48c0-bdf5-039d3d3f1715
    STEP: Creating a pod to test consume configMaps
    Sep  2 12:20:42.806: INFO: Waiting up to 5m0s for pod "pod-configmaps-fb0a46a3-2e91-4b6b-b42b-a888685b6efd" in namespace "configmap-3208" to be "Succeeded or Failed"
    Sep  2 12:20:42.815: INFO: Pod "pod-configmaps-fb0a46a3-2e91-4b6b-b42b-a888685b6efd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.921201ms
    Sep  2 12:20:44.823: INFO: Pod "pod-configmaps-fb0a46a3-2e91-4b6b-b42b-a888685b6efd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016702056s
    Sep  2 12:20:46.831: INFO: Pod "pod-configmaps-fb0a46a3-2e91-4b6b-b42b-a888685b6efd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025153493s
    Sep  2 12:20:48.837: INFO: Pod "pod-configmaps-fb0a46a3-2e91-4b6b-b42b-a888685b6efd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030571025s
    STEP: Saw pod success
    Sep  2 12:20:48.837: INFO: Pod "pod-configmaps-fb0a46a3-2e91-4b6b-b42b-a888685b6efd" satisfied condition "Succeeded or Failed"
    Sep  2 12:20:48.840: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod pod-configmaps-fb0a46a3-2e91-4b6b-b42b-a888685b6efd container agnhost-container: <nil>
    STEP: delete the pod
    Sep  2 12:20:48.866: INFO: Waiting for pod pod-configmaps-fb0a46a3-2e91-4b6b-b42b-a888685b6efd to disappear
    Sep  2 12:20:48.871: INFO: Pod pod-configmaps-fb0a46a3-2e91-4b6b-b42b-a888685b6efd no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:20:48.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-3208" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":77,"skipped":1426,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:20:48.897: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-map-96cdafaf-a84f-4bb6-970d-689f3e9dac13
    STEP: Creating a pod to test consume secrets
    Sep  2 12:20:48.976: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d737b9cf-60e9-40a7-9a8b-23342482d76f" in namespace "projected-176" to be "Succeeded or Failed"
    Sep  2 12:20:48.981: INFO: Pod "pod-projected-secrets-d737b9cf-60e9-40a7-9a8b-23342482d76f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.129778ms
    Sep  2 12:20:50.989: INFO: Pod "pod-projected-secrets-d737b9cf-60e9-40a7-9a8b-23342482d76f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013515456s
    Sep  2 12:20:52.996: INFO: Pod "pod-projected-secrets-d737b9cf-60e9-40a7-9a8b-23342482d76f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019980791s
    STEP: Saw pod success
    Sep  2 12:20:52.996: INFO: Pod "pod-projected-secrets-d737b9cf-60e9-40a7-9a8b-23342482d76f" satisfied condition "Succeeded or Failed"
    Sep  2 12:20:53.002: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-j45dw pod pod-projected-secrets-d737b9cf-60e9-40a7-9a8b-23342482d76f container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  2 12:20:53.028: INFO: Waiting for pod pod-projected-secrets-d737b9cf-60e9-40a7-9a8b-23342482d76f to disappear
    Sep  2 12:20:53.037: INFO: Pod pod-projected-secrets-d737b9cf-60e9-40a7-9a8b-23342482d76f no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:20:53.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-176" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":78,"skipped":1427,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:20:53.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-2759" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":79,"skipped":1430,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:21:00.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-7283" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":80,"skipped":1455,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    S
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":82,"skipped":1678,"failed":7,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:20:23.945: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
    STEP: Registering the crd webhook via the AdmissionRegistration API
    Sep  2 12:20:37.711: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:20:47.842: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:20:57.941: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:21:08.033: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:21:18.053: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:21:18.053: FAIL: waiting for webhook configuration to be ready

    Unexpected error:

        <*errors.errorString | 0xc000248290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should deny crd creation [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  2 12:21:18.053: waiting for webhook configuration to be ready
      Unexpected error:

          <*errors.errorString | 0xc000248290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 97 lines ...
    STEP: Destroying namespace "services-9362" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":81,"skipped":1456,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:22:01.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-3940" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":82,"skipped":1612,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 52 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:22:05.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-9576" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":82,"skipped":1460,"failed":2,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:22:02.008: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-5b70dcf0-8421-43e8-a41f-5e33904015f0
    STEP: Creating a pod to test consume configMaps
    Sep  2 12:22:02.083: INFO: Waiting up to 5m0s for pod "pod-configmaps-83993752-e5a2-4e58-aa27-132561fa0706" in namespace "configmap-6914" to be "Succeeded or Failed"

    Sep  2 12:22:02.089: INFO: Pod "pod-configmaps-83993752-e5a2-4e58-aa27-132561fa0706": Phase="Pending", Reason="", readiness=false. Elapsed: 5.133467ms
    Sep  2 12:22:04.100: INFO: Pod "pod-configmaps-83993752-e5a2-4e58-aa27-132561fa0706": Phase="Running", Reason="", readiness=true. Elapsed: 2.016097546s
    Sep  2 12:22:06.106: INFO: Pod "pod-configmaps-83993752-e5a2-4e58-aa27-132561fa0706": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02278228s
    STEP: Saw pod success
    Sep  2 12:22:06.106: INFO: Pod "pod-configmaps-83993752-e5a2-4e58-aa27-132561fa0706" satisfied condition "Succeeded or Failed"

    Sep  2 12:22:06.114: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod pod-configmaps-83993752-e5a2-4e58-aa27-132561fa0706 container configmap-volume-test: <nil>
    STEP: delete the pod
    Sep  2 12:22:06.166: INFO: Waiting for pod pod-configmaps-83993752-e5a2-4e58-aa27-132561fa0706 to disappear
    Sep  2 12:22:06.173: INFO: Pod pod-configmaps-83993752-e5a2-4e58-aa27-132561fa0706 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:22:06.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-6914" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":83,"skipped":1626,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  2 12:22:06.329: INFO: Waiting up to 5m0s for pod "downwardapi-volume-abeb6c09-1387-475b-b731-151b709add3e" in namespace "projected-1180" to be "Succeeded or Failed"

    Sep  2 12:22:06.341: INFO: Pod "downwardapi-volume-abeb6c09-1387-475b-b731-151b709add3e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.875378ms
    Sep  2 12:22:08.350: INFO: Pod "downwardapi-volume-abeb6c09-1387-475b-b731-151b709add3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020583361s
    Sep  2 12:22:10.357: INFO: Pod "downwardapi-volume-abeb6c09-1387-475b-b731-151b709add3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028001271s
    STEP: Saw pod success
    Sep  2 12:22:10.357: INFO: Pod "downwardapi-volume-abeb6c09-1387-475b-b731-151b709add3e" satisfied condition "Succeeded or Failed"

    Sep  2 12:22:10.363: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5 pod downwardapi-volume-abeb6c09-1387-475b-b731-151b709add3e container client-container: <nil>
    STEP: delete the pod
    Sep  2 12:22:10.392: INFO: Waiting for pod downwardapi-volume-abeb6c09-1387-475b-b731-151b709add3e to disappear
    Sep  2 12:22:10.397: INFO: Pod downwardapi-volume-abeb6c09-1387-475b-b731-151b709add3e no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:22:10.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-1180" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":84,"skipped":1631,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    
    SSSSSS
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":82,"skipped":1678,"failed":8,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:21:18.175: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
    STEP: Registering the crd webhook via the AdmissionRegistration API
    Sep  2 12:21:32.553: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:21:42.670: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:21:52.780: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:22:02.884: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:22:12.910: INFO: Waiting for webhook configuration to be ready...
    Sep  2 12:22:12.910: FAIL: waiting for webhook configuration to be ready

    Unexpected error:

        <*errors.errorString | 0xc000248290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should deny crd creation [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  2 12:22:12.910: waiting for webhook configuration to be ready
      Unexpected error:

          <*errors.errorString | 0xc000248290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2059
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":82,"skipped":1678,"failed":9,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  2 12:22:13.387: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81ea7940-9727-42a5-b055-9c4cb025353f" in namespace "downward-api-5852" to be "Succeeded or Failed"

    Sep  2 12:22:13.399: INFO: Pod "downwardapi-volume-81ea7940-9727-42a5-b055-9c4cb025353f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.156628ms
    Sep  2 12:22:15.406: INFO: Pod "downwardapi-volume-81ea7940-9727-42a5-b055-9c4cb025353f": Phase="Running", Reason="", readiness=true. Elapsed: 2.01887276s
    Sep  2 12:22:17.415: INFO: Pod "downwardapi-volume-81ea7940-9727-42a5-b055-9c4cb025353f": Phase="Running", Reason="", readiness=false. Elapsed: 4.028181123s
    Sep  2 12:22:19.423: INFO: Pod "downwardapi-volume-81ea7940-9727-42a5-b055-9c4cb025353f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035791812s
    STEP: Saw pod success
    Sep  2 12:22:19.423: INFO: Pod "downwardapi-volume-81ea7940-9727-42a5-b055-9c4cb025353f" satisfied condition "Succeeded or Failed"

    Sep  2 12:22:19.429: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-cznwre pod downwardapi-volume-81ea7940-9727-42a5-b055-9c4cb025353f container client-container: <nil>
    STEP: delete the pod
    Sep  2 12:22:19.469: INFO: Waiting for pod downwardapi-volume-81ea7940-9727-42a5-b055-9c4cb025353f to disappear
    Sep  2 12:22:19.473: INFO: Pod downwardapi-volume-81ea7940-9727-42a5-b055-9c4cb025353f no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:22:19.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-5852" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":83,"skipped":1692,"failed":9,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:22:21.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-9281" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":84,"skipped":1737,"failed":9,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 28 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:22:29.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-9338" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":85,"skipped":1763,"failed":9,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:22:29.240: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on tmpfs
    Sep  2 12:22:29.302: INFO: Waiting up to 5m0s for pod "pod-d29fad7a-0f4f-468c-9807-e866e0caa506" in namespace "emptydir-5223" to be "Succeeded or Failed"

    Sep  2 12:22:29.310: INFO: Pod "pod-d29fad7a-0f4f-468c-9807-e866e0caa506": Phase="Pending", Reason="", readiness=false. Elapsed: 7.517271ms
    Sep  2 12:22:31.315: INFO: Pod "pod-d29fad7a-0f4f-468c-9807-e866e0caa506": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011987864s
    Sep  2 12:22:33.320: INFO: Pod "pod-d29fad7a-0f4f-468c-9807-e866e0caa506": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017512786s
    STEP: Saw pod success
    Sep  2 12:22:33.320: INFO: Pod "pod-d29fad7a-0f4f-468c-9807-e866e0caa506" satisfied condition "Succeeded or Failed"

    Sep  2 12:22:33.324: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-worker-k0ejuu pod pod-d29fad7a-0f4f-468c-9807-e866e0caa506 container test-container: <nil>
    STEP: delete the pod
    Sep  2 12:22:33.341: INFO: Waiting for pod pod-d29fad7a-0f4f-468c-9807-e866e0caa506 to disappear
    Sep  2 12:22:33.344: INFO: Pod pod-d29fad7a-0f4f-468c-9807-e866e0caa506 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:22:33.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-5223" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":86,"skipped":1835,"failed":9,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:22:35.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-1516" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":87,"skipped":1854,"failed":9,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  2 12:22:35.492: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
    Sep  2 12:22:35.542: INFO: Waiting up to 5m0s for pod "security-context-5484d9bd-2db1-4c6e-aedf-df3c2fa55adf" in namespace "security-context-7749" to be "Succeeded or Failed"

    Sep  2 12:22:35.548: INFO: Pod "security-context-5484d9bd-2db1-4c6e-aedf-df3c2fa55adf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.901279ms
    Sep  2 12:22:37.552: INFO: Pod "security-context-5484d9bd-2db1-4c6e-aedf-df3c2fa55adf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010647634s
    Sep  2 12:22:39.557: INFO: Pod "security-context-5484d9bd-2db1-4c6e-aedf-df3c2fa55adf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015312157s
    STEP: Saw pod success
    Sep  2 12:22:39.557: INFO: Pod "security-context-5484d9bd-2db1-4c6e-aedf-df3c2fa55adf" satisfied condition "Succeeded or Failed"

    Sep  2 12:22:39.561: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5 pod security-context-5484d9bd-2db1-4c6e-aedf-df3c2fa55adf container test-container: <nil>
    STEP: delete the pod
    Sep  2 12:22:39.578: INFO: Waiting for pod security-context-5484d9bd-2db1-4c6e-aedf-df3c2fa55adf to disappear
    Sep  2 12:22:39.581: INFO: Pod security-context-5484d9bd-2db1-4c6e-aedf-df3c2fa55adf no longer exists
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:22:39.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-7749" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":88,"skipped":1864,"failed":9,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with secret pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-secret-v5sz
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  2 12:22:39.655: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-v5sz" in namespace "subpath-6443" to be "Succeeded or Failed"

    Sep  2 12:22:39.661: INFO: Pod "pod-subpath-test-secret-v5sz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.350203ms
    Sep  2 12:22:41.667: INFO: Pod "pod-subpath-test-secret-v5sz": Phase="Running", Reason="", readiness=true. Elapsed: 2.011969597s
    Sep  2 12:22:43.672: INFO: Pod "pod-subpath-test-secret-v5sz": Phase="Running", Reason="", readiness=true. Elapsed: 4.017663057s
    Sep  2 12:22:45.678: INFO: Pod "pod-subpath-test-secret-v5sz": Phase="Running", Reason="", readiness=true. Elapsed: 6.022895966s
    Sep  2 12:22:47.683: INFO: Pod "pod-subpath-test-secret-v5sz": Phase="Running", Reason="", readiness=true. Elapsed: 8.028020383s
    Sep  2 12:22:49.688: INFO: Pod "pod-subpath-test-secret-v5sz": Phase="Running", Reason="", readiness=true. Elapsed: 10.033085652s
... skipping 2 lines ...
    Sep  2 12:22:55.704: INFO: Pod "pod-subpath-test-secret-v5sz": Phase="Running", Reason="", readiness=true. Elapsed: 16.048889513s
    Sep  2 12:22:57.708: INFO: Pod "pod-subpath-test-secret-v5sz": Phase="Running", Reason="", readiness=true. Elapsed: 18.052877612s
    Sep  2 12:22:59.714: INFO: Pod "pod-subpath-test-secret-v5sz": Phase="Running", Reason="", readiness=true. Elapsed: 20.059523659s
    Sep  2 12:23:01.719: INFO: Pod "pod-subpath-test-secret-v5sz": Phase="Running", Reason="", readiness=false. Elapsed: 22.064225497s
    Sep  2 12:23:03.726: INFO: Pod "pod-subpath-test-secret-v5sz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.070852191s
    STEP: Saw pod success
    Sep  2 12:23:03.726: INFO: Pod "pod-subpath-test-secret-v5sz" satisfied condition "Succeeded or Failed"

    Sep  2 12:23:03.733: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5 pod pod-subpath-test-secret-v5sz container test-container-subpath-secret-v5sz: <nil>
    STEP: delete the pod
    Sep  2 12:23:03.767: INFO: Waiting for pod pod-subpath-test-secret-v5sz to disappear
    Sep  2 12:23:03.773: INFO: Pod pod-subpath-test-secret-v5sz no longer exists
    STEP: Deleting pod pod-subpath-test-secret-v5sz
    Sep  2 12:23:03.773: INFO: Deleting pod "pod-subpath-test-secret-v5sz" in namespace "subpath-6443"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:23:03.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-6443" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":89,"skipped":1865,"failed":9,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  2 12:23:03.853: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ed68c27-7254-4b14-bcbb-6fc34ee4a57b" in namespace "projected-89" to be "Succeeded or Failed"

    Sep  2 12:23:03.856: INFO: Pod "downwardapi-volume-3ed68c27-7254-4b14-bcbb-6fc34ee4a57b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.684ms
    Sep  2 12:23:05.860: INFO: Pod "downwardapi-volume-3ed68c27-7254-4b14-bcbb-6fc34ee4a57b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007182534s
    Sep  2 12:23:07.864: INFO: Pod "downwardapi-volume-3ed68c27-7254-4b14-bcbb-6fc34ee4a57b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011608416s
    STEP: Saw pod success
    Sep  2 12:23:07.864: INFO: Pod "downwardapi-volume-3ed68c27-7254-4b14-bcbb-6fc34ee4a57b" satisfied condition "Succeeded or Failed"

    Sep  2 12:23:07.869: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rxa2hz-md-0-n7tm6-959f5f457-7zrh5 pod downwardapi-volume-3ed68c27-7254-4b14-bcbb-6fc34ee4a57b container client-container: <nil>
    STEP: delete the pod
    Sep  2 12:23:07.888: INFO: Waiting for pod downwardapi-volume-3ed68c27-7254-4b14-bcbb-6fc34ee4a57b to disappear
    Sep  2 12:23:07.891: INFO: Pod downwardapi-volume-3ed68c27-7254-4b14-bcbb-6fc34ee4a57b no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:23:07.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-89" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":90,"skipped":1869,"failed":9,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 63 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:23:10.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-9012" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":-1,"completed":91,"skipped":1874,"failed":9,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:23:30.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-2159" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":-1,"completed":92,"skipped":1898,"failed":9,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 51 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  2 12:23:42.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-4212" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":85,"skipped":1637,"failed":5,"failures":["[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    STEP: Looking for a node to schedule stateful set and pod
    STEP: Creating pod with conflicting port in namespace statefulset-9600
    STEP: Waiting until pod test-pod will start running in namespace statefulset-9600
    STEP: Creating statefulset with conflicting port in namespace statefulset-9600
    STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9600
    Sep  2 12:23:44.190: INFO: Observed stateful pod in namespace: statefulset-9600, name: ss-0, uid: bb945416-dc68-49d2-a355-27d2ab13bc18, status phase: Pending. Waiting for statefulset controller to delete.
    Sep  2 12:23:44.202: INFO: Observed stateful pod in namespace: statefulset-9600, name: ss-0, uid: bb945416-dc68-49d2-a355-27d2ab13