PR: ⚠️ Use Kubernetes 1.25 in Quick Start docs and CAPD.
Result: failure
Tests: 0 failed / 7 succeeded
Started: 2022-09-05 15:24
Elapsed: 1h9m
Revision:
Refs: 7156
uploader: crier

No Test Failures!


Passed tests: 7

Skipped tests: 20

Error lines from build-log.txt

... skipping 899 lines ...
Status: Downloaded newer image for quay.io/jetstack/cert-manager-controller:v1.9.1
quay.io/jetstack/cert-manager-controller:v1.9.1
+ export GINKGO_NODES=3
+ GINKGO_NODES=3
+ export GINKGO_NOCOLOR=true
+ GINKGO_NOCOLOR=true
+ export GINKGO_ARGS=--fail-fast
+ GINKGO_ARGS=--fail-fast
+ export E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
+ E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
+ export ARTIFACTS=/logs/artifacts
+ ARTIFACTS=/logs/artifacts
+ export SKIP_RESOURCE_CLEANUP=false
+ SKIP_RESOURCE_CLEANUP=false
... skipping 79 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-kcp-scale-in --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-kcp-scale-in.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ipv6 --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ipv6.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-topology --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-topology.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ignition --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ignition.yaml
mkdir -p /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/test-extension
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/extension/config/default > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/test-extension/deployment.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/ginkgo-v2.1.4 -v --trace --tags=e2e --focus="\[K8s-Upgrade\]"  --nodes=3 --no-color=true --output-dir="/logs/artifacts" --junit-report="junit.e2e_suite.1.xml" --fail-fast . -- \
    -e2e.artifacts-folder="/logs/artifacts" \
    -e2e.config="/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml" \
    -e2e.skip-resource-cleanup=false -e2e.use-existing-cluster=false
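
For reference, a focused run like the one above can usually be reproduced locally. The sketch below is an assumption based on the cluster-api repository's standard test-e2e workflow, not something taken from this log (the make target and the GINKGO_FOCUS variable are the assumed pieces; paths are illustrative):

    # Minimal local sketch (assumed workflow, paths illustrative)
    export GINKGO_NODES=3
    export GINKGO_NOCOLOR=true
    export GINKGO_ARGS=--fail-fast
    export GINKGO_FOCUS='\[K8s-Upgrade\]'
    export E2E_CONF_FILE="$(pwd)/test/e2e/config/docker.yaml"
    export ARTIFACTS="$(pwd)/_artifacts"
    export SKIP_RESOURCE_CLEANUP=false
    make test-e2e    # wraps the ginkgo invocation shown above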
go: downloading github.com/blang/semver v3.5.1+incompatible
go: downloading github.com/onsi/gomega v1.20.0
go: downloading k8s.io/apimachinery v0.24.2
... skipping 229 lines ...
    kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-rbkcco-mp-0-config created
    kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-rbkcco-mp-0-config-cgroupfs created
    cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-rbkcco created
    machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-rbkcco-mp-0 created
    dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-rbkcco-dmp-0 created

    Failed to get logs for Machine k8s-upgrade-and-conformance-rbkcco-6qgfw-5qghq, Cluster k8s-upgrade-and-conformance-x40jpj/k8s-upgrade-and-conformance-rbkcco: exit status 2
    Failed to get logs for Machine k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-grv26, Cluster k8s-upgrade-and-conformance-x40jpj/k8s-upgrade-and-conformance-rbkcco: exit status 2
    Failed to get logs for Machine k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-vcz8p, Cluster k8s-upgrade-and-conformance-x40jpj/k8s-upgrade-and-conformance-rbkcco: exit status 2
    Failed to get logs for MachinePool k8s-upgrade-and-conformance-rbkcco-mp-0, Cluster k8s-upgrade-and-conformance-x40jpj/k8s-upgrade-and-conformance-rbkcco: exit status 2
  << End Captured StdOut/StdErr Output
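
The "Failed to get logs ... exit status 2" messages above come from the suite's log-collection step. Since CAPD machines are plain Docker containers named after the machines, the same logs can usually be pulled by hand; the commands below are a hedged sketch of that, not part of the job (the container name is taken from a node in this run, and the journalctl unit is an assumption about the node image):

    # Hedged sketch: inspect a CAPD machine container directly
    docker ps --format '{{.Names}}' | grep k8s-upgrade-and-conformance-rbkcco
    docker exec k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb \
        journalctl -u kubelet --no-pager | tail -n 100
    docker cp k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb:/var/log ./machine-logs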

  Begin Captured GinkgoWriter Output >>
    STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec 09/05/22 15:35:09.74
    INFO: Creating namespace k8s-upgrade-and-conformance-x40jpj
    INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-x40jpj"
... skipping 41 lines ...
    
    Running in parallel across 4 nodes
    
    Sep  5 15:43:41.342: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  5 15:43:41.349: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
    Sep  5 15:43:41.375: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
    Sep  5 15:43:41.440: INFO: The status of Pod coredns-78fcd69978-8sbft is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:41.440: INFO: The status of Pod kindnet-q7r82 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:41.440: INFO: The status of Pod kindnet-w9k9g is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:41.440: INFO: The status of Pod kube-proxy-8jjqq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:41.441: INFO: The status of Pod kube-proxy-xj6mc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:41.441: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
    Sep  5 15:43:41.441: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  5 15:43:41.441: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  5 15:43:41.441: INFO: coredns-78fcd69978-8sbft  k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  }]
    Sep  5 15:43:41.441: INFO: kindnet-q7r82             k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:46 +0000 UTC  }]
    Sep  5 15:43:41.441: INFO: kindnet-w9k9g             k8s-upgrade-and-conformance-rbkcco-worker-k01c4f  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:01 +0000 UTC  }]
    Sep  5 15:43:41.441: INFO: kube-proxy-8jjqq          k8s-upgrade-and-conformance-rbkcco-worker-k01c4f  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:59 +0000 UTC  }]
    Sep  5 15:43:41.441: INFO: kube-proxy-xj6mc          k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:23 +0000 UTC  }]
    Sep  5 15:43:41.441: INFO: 
    Sep  5 15:43:43.487: INFO: The status of Pod coredns-78fcd69978-8sbft is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:43.487: INFO: The status of Pod kindnet-q7r82 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:43.487: INFO: The status of Pod kindnet-w9k9g is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:43.487: INFO: The status of Pod kube-proxy-8jjqq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:43.487: INFO: The status of Pod kube-proxy-xj6mc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:43.487: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
    Sep  5 15:43:43.487: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  5 15:43:43.487: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  5 15:43:43.487: INFO: coredns-78fcd69978-8sbft  k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  }]
    Sep  5 15:43:43.487: INFO: kindnet-q7r82             k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:46 +0000 UTC  }]
    Sep  5 15:43:43.487: INFO: kindnet-w9k9g             k8s-upgrade-and-conformance-rbkcco-worker-k01c4f  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:01 +0000 UTC  }]
    Sep  5 15:43:43.487: INFO: kube-proxy-8jjqq          k8s-upgrade-and-conformance-rbkcco-worker-k01c4f  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:59 +0000 UTC  }]
    Sep  5 15:43:43.487: INFO: kube-proxy-xj6mc          k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:23 +0000 UTC  }]
    Sep  5 15:43:43.487: INFO: 
    Sep  5 15:43:45.487: INFO: The status of Pod coredns-78fcd69978-8sbft is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:45.487: INFO: The status of Pod kindnet-q7r82 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:45.487: INFO: The status of Pod kindnet-w9k9g is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:45.487: INFO: The status of Pod kube-proxy-8jjqq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:45.487: INFO: The status of Pod kube-proxy-xj6mc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:45.487: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
    Sep  5 15:43:45.487: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  5 15:43:45.487: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  5 15:43:45.487: INFO: coredns-78fcd69978-8sbft  k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  }]
    Sep  5 15:43:45.487: INFO: kindnet-q7r82             k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:46 +0000 UTC  }]
    Sep  5 15:43:45.487: INFO: kindnet-w9k9g             k8s-upgrade-and-conformance-rbkcco-worker-k01c4f  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:01 +0000 UTC  }]
    Sep  5 15:43:45.488: INFO: kube-proxy-8jjqq          k8s-upgrade-and-conformance-rbkcco-worker-k01c4f  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:59 +0000 UTC  }]
    Sep  5 15:43:45.488: INFO: kube-proxy-xj6mc          k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:23 +0000 UTC  }]
    Sep  5 15:43:45.488: INFO: 
    Sep  5 15:43:47.487: INFO: The status of Pod coredns-78fcd69978-8sbft is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:47.488: INFO: The status of Pod kindnet-q7r82 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:47.488: INFO: The status of Pod kindnet-w9k9g is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:47.488: INFO: The status of Pod kube-proxy-8jjqq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:47.488: INFO: The status of Pod kube-proxy-xj6mc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:47.488: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (6 seconds elapsed)
    Sep  5 15:43:47.488: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  5 15:43:47.488: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  5 15:43:47.488: INFO: coredns-78fcd69978-8sbft  k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  }]
    Sep  5 15:43:47.488: INFO: kindnet-q7r82             k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:46 +0000 UTC  }]
    Sep  5 15:43:47.488: INFO: kindnet-w9k9g             k8s-upgrade-and-conformance-rbkcco-worker-k01c4f  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:01 +0000 UTC  }]
    Sep  5 15:43:47.488: INFO: kube-proxy-8jjqq          k8s-upgrade-and-conformance-rbkcco-worker-k01c4f  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:59 +0000 UTC  }]
    Sep  5 15:43:47.488: INFO: kube-proxy-xj6mc          k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:23 +0000 UTC  }]
    Sep  5 15:43:47.488: INFO: 
    Sep  5 15:43:49.498: INFO: The status of Pod coredns-78fcd69978-8sbft is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:49.498: INFO: The status of Pod kindnet-q7r82 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:49.498: INFO: The status of Pod kindnet-w9k9g is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:49.499: INFO: The status of Pod kube-proxy-8jjqq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:49.499: INFO: The status of Pod kube-proxy-xj6mc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:49.499: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (8 seconds elapsed)
    Sep  5 15:43:49.499: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  5 15:43:49.499: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  5 15:43:49.499: INFO: coredns-78fcd69978-8sbft  k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  }]
    Sep  5 15:43:49.499: INFO: kindnet-q7r82             k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:46 +0000 UTC  }]
    Sep  5 15:43:49.499: INFO: kindnet-w9k9g             k8s-upgrade-and-conformance-rbkcco-worker-k01c4f  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:01 +0000 UTC  }]
    Sep  5 15:43:49.500: INFO: kube-proxy-8jjqq          k8s-upgrade-and-conformance-rbkcco-worker-k01c4f  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:59 +0000 UTC  }]
    Sep  5 15:43:49.500: INFO: kube-proxy-xj6mc          k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:23 +0000 UTC  }]
    Sep  5 15:43:49.500: INFO: 
    Sep  5 15:43:51.495: INFO: The status of Pod coredns-78fcd69978-8sbft is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:51.495: INFO: The status of Pod kindnet-q7r82 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:51.495: INFO: The status of Pod kindnet-w9k9g is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:51.495: INFO: The status of Pod kube-proxy-8jjqq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:51.495: INFO: The status of Pod kube-proxy-xj6mc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:51.495: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (10 seconds elapsed)
    Sep  5 15:43:51.495: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  5 15:43:51.495: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  5 15:43:51.495: INFO: coredns-78fcd69978-8sbft  k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  }]
    Sep  5 15:43:51.495: INFO: kindnet-q7r82             k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:46 +0000 UTC  }]
    Sep  5 15:43:51.495: INFO: kindnet-w9k9g             k8s-upgrade-and-conformance-rbkcco-worker-k01c4f  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:01 +0000 UTC  }]
    Sep  5 15:43:51.495: INFO: kube-proxy-8jjqq          k8s-upgrade-and-conformance-rbkcco-worker-k01c4f  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:59 +0000 UTC  }]
    Sep  5 15:43:51.495: INFO: kube-proxy-xj6mc          k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:23 +0000 UTC  }]
    Sep  5 15:43:51.495: INFO: 
    Sep  5 15:43:53.479: INFO: The status of Pod coredns-78fcd69978-8sbft is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:53.479: INFO: The status of Pod kindnet-q7r82 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:53.479: INFO: The status of Pod kindnet-w9k9g is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:53.479: INFO: The status of Pod kube-proxy-8jjqq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:53.479: INFO: The status of Pod kube-proxy-xj6mc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:53.479: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (12 seconds elapsed)
    Sep  5 15:43:53.479: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  5 15:43:53.479: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  5 15:43:53.479: INFO: coredns-78fcd69978-8sbft  k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  }]
    Sep  5 15:43:53.479: INFO: kindnet-q7r82             k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:46 +0000 UTC  }]
    Sep  5 15:43:53.480: INFO: kindnet-w9k9g             k8s-upgrade-and-conformance-rbkcco-worker-k01c4f  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:01 +0000 UTC  }]
    Sep  5 15:43:53.480: INFO: kube-proxy-8jjqq          k8s-upgrade-and-conformance-rbkcco-worker-k01c4f  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:59 +0000 UTC  }]
    Sep  5 15:43:53.480: INFO: kube-proxy-xj6mc          k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:23 +0000 UTC  }]
    Sep  5 15:43:53.480: INFO: 
    Sep  5 15:43:55.484: INFO: The status of Pod coredns-78fcd69978-8sbft is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:55.485: INFO: The status of Pod kindnet-q7r82 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:55.485: INFO: The status of Pod kindnet-w9k9g is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:55.485: INFO: The status of Pod kube-proxy-8jjqq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:55.485: INFO: The status of Pod kube-proxy-xj6mc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:55.485: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (14 seconds elapsed)
    Sep  5 15:43:55.485: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  5 15:43:55.485: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  5 15:43:55.485: INFO: coredns-78fcd69978-8sbft  k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  }]
    Sep  5 15:43:55.485: INFO: kindnet-q7r82             k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:46 +0000 UTC  }]
    Sep  5 15:43:55.485: INFO: kindnet-w9k9g             k8s-upgrade-and-conformance-rbkcco-worker-k01c4f  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:01 +0000 UTC  }]
    Sep  5 15:43:55.485: INFO: kube-proxy-8jjqq          k8s-upgrade-and-conformance-rbkcco-worker-k01c4f  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:59 +0000 UTC  }]
    Sep  5 15:43:55.485: INFO: kube-proxy-xj6mc          k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:23 +0000 UTC  }]
    Sep  5 15:43:55.485: INFO: 
    Sep  5 15:43:57.477: INFO: The status of Pod coredns-78fcd69978-8sbft is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:57.477: INFO: The status of Pod kindnet-q7r82 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:57.477: INFO: The status of Pod kindnet-w9k9g is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:57.477: INFO: The status of Pod kube-proxy-8jjqq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:57.477: INFO: The status of Pod kube-proxy-xj6mc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:57.477: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (16 seconds elapsed)
    Sep  5 15:43:57.477: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  5 15:43:57.477: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  5 15:43:57.477: INFO: coredns-78fcd69978-8sbft  k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  }]
    Sep  5 15:43:57.477: INFO: kindnet-q7r82             k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:46 +0000 UTC  }]
    Sep  5 15:43:57.477: INFO: kindnet-w9k9g             k8s-upgrade-and-conformance-rbkcco-worker-k01c4f  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:01 +0000 UTC  }]
    Sep  5 15:43:57.477: INFO: kube-proxy-8jjqq          k8s-upgrade-and-conformance-rbkcco-worker-k01c4f  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:59 +0000 UTC  }]
    Sep  5 15:43:57.477: INFO: kube-proxy-xj6mc          k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:23 +0000 UTC  }]
    Sep  5 15:43:57.477: INFO: 
    Sep  5 15:43:59.482: INFO: The status of Pod coredns-78fcd69978-8sbft is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:59.482: INFO: The status of Pod kindnet-q7r82 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:59.482: INFO: The status of Pod kindnet-w9k9g is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:59.482: INFO: The status of Pod kube-proxy-8jjqq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:59.483: INFO: The status of Pod kube-proxy-xj6mc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:43:59.483: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (18 seconds elapsed)
    Sep  5 15:43:59.483: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  5 15:43:59.483: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  5 15:43:59.483: INFO: coredns-78fcd69978-8sbft  k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  }]
    Sep  5 15:43:59.483: INFO: kindnet-q7r82             k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:46 +0000 UTC  }]
    Sep  5 15:43:59.483: INFO: kindnet-w9k9g             k8s-upgrade-and-conformance-rbkcco-worker-k01c4f  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:01 +0000 UTC  }]
    Sep  5 15:43:59.483: INFO: kube-proxy-8jjqq          k8s-upgrade-and-conformance-rbkcco-worker-k01c4f  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:59 +0000 UTC  }]
    Sep  5 15:43:59.483: INFO: kube-proxy-xj6mc          k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:23 +0000 UTC  }]
    Sep  5 15:43:59.483: INFO: 
    Sep  5 15:44:01.479: INFO: The status of Pod coredns-78fcd69978-8sbft is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:44:01.479: INFO: The status of Pod kindnet-q7r82 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:44:01.479: INFO: The status of Pod kindnet-w9k9g is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:44:01.479: INFO: The status of Pod kube-proxy-8jjqq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:44:01.479: INFO: The status of Pod kube-proxy-xj6mc is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:44:01.479: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (20 seconds elapsed)
    Sep  5 15:44:01.479: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  5 15:44:01.479: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  5 15:44:01.479: INFO: coredns-78fcd69978-8sbft  k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  }]
    Sep  5 15:44:01.479: INFO: kindnet-q7r82             k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:36:46 +0000 UTC  }]
    Sep  5 15:44:01.479: INFO: kindnet-w9k9g             k8s-upgrade-and-conformance-rbkcco-worker-k01c4f  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:37:01 +0000 UTC  }]
    Sep  5 15:44:01.479: INFO: kube-proxy-8jjqq          k8s-upgrade-and-conformance-rbkcco-worker-k01c4f  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:59 +0000 UTC  }]
    Sep  5 15:44:01.479: INFO: kube-proxy-xj6mc          k8s-upgrade-and-conformance-rbkcco-worker-fl8tbb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:42:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:41:23 +0000 UTC  }]
    Sep  5 15:44:01.479: INFO: 
    Sep  5 15:44:03.492: INFO: The status of Pod coredns-78fcd69978-n5vxq is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  5 15:44:03.492: INFO: 15 / 16 pods in namespace 'kube-system' are running and ready (22 seconds elapsed)
    Sep  5 15:44:03.492: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  5 15:44:03.492: INFO: POD                       NODE                                                            PHASE    GRACE  CONDITIONS
    Sep  5 15:44:03.492: INFO: coredns-78fcd69978-n5vxq  k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-grv26  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:44:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:44:01 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:44:01 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:44:01 +0000 UTC  }]
    Sep  5 15:44:03.492: INFO: 
    Sep  5 15:44:05.477: INFO: 16 / 16 pods in namespace 'kube-system' are running and ready (24 seconds elapsed)
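
    The readiness poll that finally settles here can be checked by hand against the workload cluster; a minimal sketch, assuming direct kubectl access with the kubeconfig shown earlier in the log (the wait command is an illustration of the same check, not what the framework runs):

    # Hedged sketch: confirm kube-system readiness the way the poll above does
    export KUBECONFIG=/tmp/kubeconfig
    kubectl get pods -n kube-system -o wide
    kubectl wait --for=condition=Ready pods --all -n kube-system --timeout=10m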
... skipping 63 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:44:13.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "limitrange-5216" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":1,"skipped":18,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:44:13.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-2676" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] version v1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 345 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:44:17.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "proxy-9874" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    Sep  5 15:44:05.738: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-8ce8cd71-87e3-408b-a3b3-fea27ab10353
    STEP: Creating a pod to test consume configMaps
    Sep  5 15:44:05.799: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a9fc0f3b-553f-4dfc-a476-367fb575be47" in namespace "projected-3787" to be "Succeeded or Failed"

    Sep  5 15:44:05.818: INFO: Pod "pod-projected-configmaps-a9fc0f3b-553f-4dfc-a476-367fb575be47": Phase="Pending", Reason="", readiness=false. Elapsed: 19.114676ms
    Sep  5 15:44:07.831: INFO: Pod "pod-projected-configmaps-a9fc0f3b-553f-4dfc-a476-367fb575be47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032552675s
    Sep  5 15:44:09.847: INFO: Pod "pod-projected-configmaps-a9fc0f3b-553f-4dfc-a476-367fb575be47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048592576s
    Sep  5 15:44:11.854: INFO: Pod "pod-projected-configmaps-a9fc0f3b-553f-4dfc-a476-367fb575be47": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055612222s
    Sep  5 15:44:13.941: INFO: Pod "pod-projected-configmaps-a9fc0f3b-553f-4dfc-a476-367fb575be47": Phase="Running", Reason="", readiness=true. Elapsed: 8.142013928s
    Sep  5 15:44:15.950: INFO: Pod "pod-projected-configmaps-a9fc0f3b-553f-4dfc-a476-367fb575be47": Phase="Running", Reason="", readiness=true. Elapsed: 10.151006836s
    Sep  5 15:44:17.957: INFO: Pod "pod-projected-configmaps-a9fc0f3b-553f-4dfc-a476-367fb575be47": Phase="Running", Reason="", readiness=false. Elapsed: 12.157787021s
    Sep  5 15:44:19.961: INFO: Pod "pod-projected-configmaps-a9fc0f3b-553f-4dfc-a476-367fb575be47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.161909628s
    STEP: Saw pod success
    Sep  5 15:44:19.961: INFO: Pod "pod-projected-configmaps-a9fc0f3b-553f-4dfc-a476-367fb575be47" satisfied condition "Succeeded or Failed"

    Sep  5 15:44:19.965: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-0xh5an pod pod-projected-configmaps-a9fc0f3b-553f-4dfc-a476-367fb575be47 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 15:44:19.993: INFO: Waiting for pod pod-projected-configmaps-a9fc0f3b-553f-4dfc-a476-367fb575be47 to disappear
    Sep  5 15:44:19.996: INFO: Pod pod-projected-configmaps-a9fc0f3b-553f-4dfc-a476-367fb575be47 no longer exists
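
    The "Succeeded or Failed" wait above (and the identical waits in the later secrets and projected-secret specs) can be mirrored with a plain kubectl wait; a hedged sketch using the pod and namespace names from this log, where the jsonpath expression is an assumption about how to reproduce the check rather than what the framework executes:

    kubectl --kubeconfig /tmp/kubeconfig -n projected-3787 \
        wait --for=jsonpath='{.status.phase}'=Succeeded \
        pod/pod-projected-configmaps-a9fc0f3b-553f-4dfc-a476-367fb575be47 --timeout=5m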
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:44:19.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3787" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":15,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:44:22.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-3949" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:44:22.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-6256" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":3,"skipped":41,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
    STEP: Destroying namespace "webhook-1402-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":4,"skipped":47,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:44:26.066: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-ad6613a0-2053-4bda-a27d-af45d454beca
    STEP: Creating a pod to test consume secrets
    Sep  5 15:44:26.163: INFO: Waiting up to 5m0s for pod "pod-secrets-13bad8bd-fa9d-41b8-b185-6e68da3f68d0" in namespace "secrets-8402" to be "Succeeded or Failed"

    Sep  5 15:44:26.167: INFO: Pod "pod-secrets-13bad8bd-fa9d-41b8-b185-6e68da3f68d0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.611913ms
    Sep  5 15:44:28.179: INFO: Pod "pod-secrets-13bad8bd-fa9d-41b8-b185-6e68da3f68d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01576352s
    Sep  5 15:44:30.185: INFO: Pod "pod-secrets-13bad8bd-fa9d-41b8-b185-6e68da3f68d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021308033s
    STEP: Saw pod success
    Sep  5 15:44:30.185: INFO: Pod "pod-secrets-13bad8bd-fa9d-41b8-b185-6e68da3f68d0" satisfied condition "Succeeded or Failed"

    Sep  5 15:44:30.189: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-secrets-13bad8bd-fa9d-41b8-b185-6e68da3f68d0 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  5 15:44:30.205: INFO: Waiting for pod pod-secrets-13bad8bd-fa9d-41b8-b185-6e68da3f68d0 to disappear
    Sep  5 15:44:30.209: INFO: Pod pod-secrets-13bad8bd-fa9d-41b8-b185-6e68da3f68d0 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:44:30.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-8402" for this suite.
    STEP: Destroying namespace "secret-namespace-4607" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":59,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 50 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:44:30.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-9286" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":93,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:44:33.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-38" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":2,"skipped":31,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "services-1554" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":3,"skipped":57,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 28 lines ...
    STEP: Destroying namespace "webhook-5599-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":6,"skipped":88,"failed":0}

    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:44:43.938: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-29d4bf29-f869-4102-953d-0b023f10b978
    STEP: Creating a pod to test consume secrets
    Sep  5 15:44:44.033: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e729a144-9aa9-470a-8e2f-3c13e3973c41" in namespace "projected-8509" to be "Succeeded or Failed"
    Sep  5 15:44:44.040: INFO: Pod "pod-projected-secrets-e729a144-9aa9-470a-8e2f-3c13e3973c41": Phase="Pending", Reason="", readiness=false. Elapsed: 7.485796ms
    Sep  5 15:44:46.047: INFO: Pod "pod-projected-secrets-e729a144-9aa9-470a-8e2f-3c13e3973c41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013715041s
    Sep  5 15:44:48.051: INFO: Pod "pod-projected-secrets-e729a144-9aa9-470a-8e2f-3c13e3973c41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018490048s
    STEP: Saw pod success
    Sep  5 15:44:48.051: INFO: Pod "pod-projected-secrets-e729a144-9aa9-470a-8e2f-3c13e3973c41" satisfied condition "Succeeded or Failed"
    Sep  5 15:44:48.055: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-projected-secrets-e729a144-9aa9-470a-8e2f-3c13e3973c41 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  5 15:44:48.072: INFO: Waiting for pod pod-projected-secrets-e729a144-9aa9-470a-8e2f-3c13e3973c41 to disappear
    Sep  5 15:44:48.078: INFO: Pod pod-projected-secrets-e729a144-9aa9-470a-8e2f-3c13e3973c41 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:44:48.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-8509" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":88,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:44:48.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4298" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":59,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
    STEP: Destroying namespace "webhook-7112-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":5,"skipped":71,"failed":0}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    STEP: Deploying the webhook service
    STEP: Verifying the service has paired with the endpoint
    Sep  5 15:44:19.110: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
    [It] listing mutating webhooks should work [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Listing all of the created validation webhooks
    Sep  5 15:44:53.173: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.StatusError | 0xc003ea85a0>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {
                    SelfLink: "",
                    ResourceVersion: "",
... skipping 34 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      listing mutating webhooks should work [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 15:44:53.173: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.StatusError | 0xc003ea85a0>: {
              ErrStatus: {
                  TypeMeta: {Kind: "", APIVersion: ""},
                  ListMeta: {
                      SelfLink: "",
                      ResourceVersion: "",
... skipping 62 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:44:56.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-6812" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":82,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":2,"skipped":40,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:44:53.263: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
    STEP: Destroying namespace "webhook-1337-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":3,"skipped":40,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":89,"failed":0}

    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:44:56.198: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename deployment
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:45:02.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-3763" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":9,"skipped":89,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 43 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:45:20.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-4946" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":149,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:45:28.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-4193" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":8,"skipped":175,"failed":0}

    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:45:28.030: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 15:45:28.073: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b2c5fec9-4070-462b-ae8f-99ec33c86fdc" in namespace "projected-2637" to be "Succeeded or Failed"
    Sep  5 15:45:28.077: INFO: Pod "downwardapi-volume-b2c5fec9-4070-462b-ae8f-99ec33c86fdc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.978402ms
    Sep  5 15:45:30.083: INFO: Pod "downwardapi-volume-b2c5fec9-4070-462b-ae8f-99ec33c86fdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010436845s
    Sep  5 15:45:32.090: INFO: Pod "downwardapi-volume-b2c5fec9-4070-462b-ae8f-99ec33c86fdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0165365s
    STEP: Saw pod success
    Sep  5 15:45:32.090: INFO: Pod "downwardapi-volume-b2c5fec9-4070-462b-ae8f-99ec33c86fdc" satisfied condition "Succeeded or Failed"
    Sep  5 15:45:32.093: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod downwardapi-volume-b2c5fec9-4070-462b-ae8f-99ec33c86fdc container client-container: <nil>
    STEP: delete the pod
    Sep  5 15:45:32.113: INFO: Waiting for pod downwardapi-volume-b2c5fec9-4070-462b-ae8f-99ec33c86fdc to disappear
    Sep  5 15:45:32.117: INFO: Pod downwardapi-volume-b2c5fec9-4070-462b-ae8f-99ec33c86fdc no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:45:32.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2637" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":175,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:45:36.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-6373" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":185,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:47:00.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-3084" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":11,"skipped":193,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:47:11.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-9033" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":12,"skipped":200,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:47:11.675: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-fc08aded-90c0-4d29-a941-8efc9e0840c8
    STEP: Creating a pod to test consume configMaps
    Sep  5 15:47:11.742: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-057d3ff4-4f2d-4c79-9f88-4be91dcc3a9a" in namespace "projected-6994" to be "Succeeded or Failed"
    Sep  5 15:47:11.746: INFO: Pod "pod-projected-configmaps-057d3ff4-4f2d-4c79-9f88-4be91dcc3a9a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.997561ms
    Sep  5 15:47:13.752: INFO: Pod "pod-projected-configmaps-057d3ff4-4f2d-4c79-9f88-4be91dcc3a9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00942842s
    Sep  5 15:47:15.756: INFO: Pod "pod-projected-configmaps-057d3ff4-4f2d-4c79-9f88-4be91dcc3a9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013961059s
    STEP: Saw pod success
    Sep  5 15:47:15.756: INFO: Pod "pod-projected-configmaps-057d3ff4-4f2d-4c79-9f88-4be91dcc3a9a" satisfied condition "Succeeded or Failed"
    Sep  5 15:47:15.760: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-grv26 pod pod-projected-configmaps-057d3ff4-4f2d-4c79-9f88-4be91dcc3a9a container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 15:47:15.792: INFO: Waiting for pod pod-projected-configmaps-057d3ff4-4f2d-4c79-9f88-4be91dcc3a9a to disappear
    Sep  5 15:47:15.795: INFO: Pod pod-projected-configmaps-057d3ff4-4f2d-4c79-9f88-4be91dcc3a9a no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:47:15.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-6994" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":253,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:47:15.809: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide pod UID as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep  5 15:47:15.858: INFO: Waiting up to 5m0s for pod "downward-api-25392c51-eab3-4e95-9b79-e4ec1762ec31" in namespace "downward-api-7037" to be "Succeeded or Failed"
    Sep  5 15:47:15.862: INFO: Pod "downward-api-25392c51-eab3-4e95-9b79-e4ec1762ec31": Phase="Pending", Reason="", readiness=false. Elapsed: 3.740112ms
    Sep  5 15:47:17.867: INFO: Pod "downward-api-25392c51-eab3-4e95-9b79-e4ec1762ec31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009041184s
    Sep  5 15:47:19.871: INFO: Pod "downward-api-25392c51-eab3-4e95-9b79-e4ec1762ec31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013115262s
    STEP: Saw pod success
    Sep  5 15:47:19.871: INFO: Pod "downward-api-25392c51-eab3-4e95-9b79-e4ec1762ec31" satisfied condition "Succeeded or Failed"
    Sep  5 15:47:19.875: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-grv26 pod downward-api-25392c51-eab3-4e95-9b79-e4ec1762ec31 container dapi-container: <nil>
    STEP: delete the pod
    Sep  5 15:47:19.891: INFO: Waiting for pod downward-api-25392c51-eab3-4e95-9b79-e4ec1762ec31 to disappear
    Sep  5 15:47:19.895: INFO: Pod downward-api-25392c51-eab3-4e95-9b79-e4ec1762ec31 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:47:19.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-7037" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":255,"failed":0}

    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:47:19.907: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via the environment [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating secret secrets-2276/secret-test-89686cc8-e71e-4b40-94e5-487c7bc0bef9
    STEP: Creating a pod to test consume secrets
    Sep  5 15:47:19.954: INFO: Waiting up to 5m0s for pod "pod-configmaps-83b9cd6f-22aa-4321-910a-8483ab289ec3" in namespace "secrets-2276" to be "Succeeded or Failed"
    Sep  5 15:47:19.958: INFO: Pod "pod-configmaps-83b9cd6f-22aa-4321-910a-8483ab289ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.68338ms
    Sep  5 15:47:21.962: INFO: Pod "pod-configmaps-83b9cd6f-22aa-4321-910a-8483ab289ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007751988s
    Sep  5 15:47:23.967: INFO: Pod "pod-configmaps-83b9cd6f-22aa-4321-910a-8483ab289ec3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013075837s
    STEP: Saw pod success
    Sep  5 15:47:23.967: INFO: Pod "pod-configmaps-83b9cd6f-22aa-4321-910a-8483ab289ec3" satisfied condition "Succeeded or Failed"
    Sep  5 15:47:23.975: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-grv26 pod pod-configmaps-83b9cd6f-22aa-4321-910a-8483ab289ec3 container env-test: <nil>
    STEP: delete the pod
    Sep  5 15:47:24.003: INFO: Waiting for pod pod-configmaps-83b9cd6f-22aa-4321-910a-8483ab289ec3 to disappear
    Sep  5 15:47:24.007: INFO: Pod pod-configmaps-83b9cd6f-22aa-4321-910a-8483ab289ec3 no longer exists
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:47:24.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-2276" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":255,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:47:24.029: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-24627020-aafd-4b87-aefb-0aeeba1d7d6d
    STEP: Creating a pod to test consume configMaps
    Sep  5 15:47:24.112: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-72e36389-f411-4d20-b87d-9dfc0da21569" in namespace "projected-5199" to be "Succeeded or Failed"
    Sep  5 15:47:24.117: INFO: Pod "pod-projected-configmaps-72e36389-f411-4d20-b87d-9dfc0da21569": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222708ms
    Sep  5 15:47:26.121: INFO: Pod "pod-projected-configmaps-72e36389-f411-4d20-b87d-9dfc0da21569": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008622403s
    Sep  5 15:47:28.128: INFO: Pod "pod-projected-configmaps-72e36389-f411-4d20-b87d-9dfc0da21569": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014630465s
    STEP: Saw pod success
    Sep  5 15:47:28.128: INFO: Pod "pod-projected-configmaps-72e36389-f411-4d20-b87d-9dfc0da21569" satisfied condition "Succeeded or Failed"
    Sep  5 15:47:28.131: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-grv26 pod pod-projected-configmaps-72e36389-f411-4d20-b87d-9dfc0da21569 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 15:47:28.150: INFO: Waiting for pod pod-projected-configmaps-72e36389-f411-4d20-b87d-9dfc0da21569 to disappear
    Sep  5 15:47:28.154: INFO: Pod pod-projected-configmaps-72e36389-f411-4d20-b87d-9dfc0da21569 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:47:28.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5199" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":258,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:47:28.185: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-cc3c42b1-be07-4eed-891a-e50ec6b4da69
    STEP: Creating a pod to test consume secrets
    Sep  5 15:47:28.235: INFO: Waiting up to 5m0s for pod "pod-secrets-c152fc2f-48d6-4997-bffd-e1350f110c85" in namespace "secrets-9852" to be "Succeeded or Failed"
    Sep  5 15:47:28.239: INFO: Pod "pod-secrets-c152fc2f-48d6-4997-bffd-e1350f110c85": Phase="Pending", Reason="", readiness=false. Elapsed: 3.764262ms
    Sep  5 15:47:30.244: INFO: Pod "pod-secrets-c152fc2f-48d6-4997-bffd-e1350f110c85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008849978s
    Sep  5 15:47:32.249: INFO: Pod "pod-secrets-c152fc2f-48d6-4997-bffd-e1350f110c85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013772402s
    STEP: Saw pod success
    Sep  5 15:47:32.249: INFO: Pod "pod-secrets-c152fc2f-48d6-4997-bffd-e1350f110c85" satisfied condition "Succeeded or Failed"
    Sep  5 15:47:32.252: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-grv26 pod pod-secrets-c152fc2f-48d6-4997-bffd-e1350f110c85 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  5 15:47:32.270: INFO: Waiting for pod pod-secrets-c152fc2f-48d6-4997-bffd-e1350f110c85 to disappear
    Sep  5 15:47:32.273: INFO: Pod pod-secrets-c152fc2f-48d6-4997-bffd-e1350f110c85 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:47:32.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-9852" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":267,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:47:32.338: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail to create secret due to empty secret key [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name secret-emptykey-test-9808d1cb-897c-4722-8d54-c15607d14e59
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:47:32.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-9411" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":18,"skipped":300,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:45:02.705: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod with failed condition
    STEP: updating the pod
    Sep  5 15:47:03.300: INFO: Successfully updated pod "var-expansion-83224ed6-7f81-43ee-bb53-577117ee9a96"
    STEP: waiting for pod running
    STEP: deleting the pod gracefully
    Sep  5 15:47:05.308: INFO: Deleting pod "var-expansion-83224ed6-7f81-43ee-bb53-577117ee9a96" in namespace "var-expansion-7348"
    Sep  5 15:47:05.314: INFO: Wait up to 5m0s for pod "var-expansion-83224ed6-7f81-43ee-bb53-577117ee9a96" to be fully deleted
... skipping 6 lines ...
    • [SLOW TEST:154.627 seconds]
    [sig-node] Variable Expansion
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":10,"skipped":92,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] PodTemplates
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:47:37.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "podtemplate-7340" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":11,"skipped":101,"failed":0}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    STEP: Destroying namespace "webhook-2065-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":12,"skipped":120,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:47:44.719: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename job
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a job
    STEP: Ensuring job reaches completions
    [AfterEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:47:54.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-6400" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":13,"skipped":165,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:47:56.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-7378" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":171,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:48:02.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-2948" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":-1,"completed":19,"skipped":322,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:48:09.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-9629" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":20,"skipped":358,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's memory request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 15:48:09.249: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b7c62567-7bf9-4ff3-8743-308ac9ceafe6" in namespace "projected-7931" to be "Succeeded or Failed"
    Sep  5 15:48:09.253: INFO: Pod "downwardapi-volume-b7c62567-7bf9-4ff3-8743-308ac9ceafe6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.936451ms
    Sep  5 15:48:11.259: INFO: Pod "downwardapi-volume-b7c62567-7bf9-4ff3-8743-308ac9ceafe6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009794185s
    Sep  5 15:48:13.264: INFO: Pod "downwardapi-volume-b7c62567-7bf9-4ff3-8743-308ac9ceafe6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014396486s
    STEP: Saw pod success
    Sep  5 15:48:13.264: INFO: Pod "downwardapi-volume-b7c62567-7bf9-4ff3-8743-308ac9ceafe6" satisfied condition "Succeeded or Failed"
    Sep  5 15:48:13.267: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-grv26 pod downwardapi-volume-b7c62567-7bf9-4ff3-8743-308ac9ceafe6 container client-container: <nil>
    STEP: delete the pod
    Sep  5 15:48:13.285: INFO: Waiting for pod downwardapi-volume-b7c62567-7bf9-4ff3-8743-308ac9ceafe6 to disappear
    Sep  5 15:48:13.288: INFO: Pod downwardapi-volume-b7c62567-7bf9-4ff3-8743-308ac9ceafe6 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:48:13.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7931" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":428,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's cpu limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 15:48:13.403: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b33d566c-f495-44cb-9f5a-96f94182713c" in namespace "projected-7871" to be "Succeeded or Failed"
    Sep  5 15:48:13.407: INFO: Pod "downwardapi-volume-b33d566c-f495-44cb-9f5a-96f94182713c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.265933ms
    Sep  5 15:48:15.413: INFO: Pod "downwardapi-volume-b33d566c-f495-44cb-9f5a-96f94182713c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009772922s
    Sep  5 15:48:17.423: INFO: Pod "downwardapi-volume-b33d566c-f495-44cb-9f5a-96f94182713c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019610806s
    STEP: Saw pod success
    Sep  5 15:48:17.423: INFO: Pod "downwardapi-volume-b33d566c-f495-44cb-9f5a-96f94182713c" satisfied condition "Succeeded or Failed"
    Sep  5 15:48:17.427: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod downwardapi-volume-b33d566c-f495-44cb-9f5a-96f94182713c container client-container: <nil>
    STEP: delete the pod
    Sep  5 15:48:17.452: INFO: Waiting for pod downwardapi-volume-b33d566c-f495-44cb-9f5a-96f94182713c to disappear
    Sep  5 15:48:17.455: INFO: Pod downwardapi-volume-b33d566c-f495-44cb-9f5a-96f94182713c no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:48:17.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7871" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":430,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:48:18.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-9170" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":23,"skipped":444,"failed":0}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:48:18.615: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail to create ConfigMap with empty key [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap that has name configmap-test-emptyKey-4a62c736-eeea-4e0e-8d62-5b581547d66f
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:48:18.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-7295" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":24,"skipped":462,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:48:18.678: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-9cfe2596-6ff0-4360-83c1-71a9bc1f3259
    STEP: Creating a pod to test consume configMaps
    Sep  5 15:48:18.728: INFO: Waiting up to 5m0s for pod "pod-configmaps-5dbb3e86-9662-44b0-b8c6-355de8610918" in namespace "configmap-8812" to be "Succeeded or Failed"
    Sep  5 15:48:18.732: INFO: Pod "pod-configmaps-5dbb3e86-9662-44b0-b8c6-355de8610918": Phase="Pending", Reason="", readiness=false. Elapsed: 3.847424ms
    Sep  5 15:48:20.739: INFO: Pod "pod-configmaps-5dbb3e86-9662-44b0-b8c6-355de8610918": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010705297s
    Sep  5 15:48:22.746: INFO: Pod "pod-configmaps-5dbb3e86-9662-44b0-b8c6-355de8610918": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017698162s
    STEP: Saw pod success
    Sep  5 15:48:22.746: INFO: Pod "pod-configmaps-5dbb3e86-9662-44b0-b8c6-355de8610918" satisfied condition "Succeeded or Failed"
    Sep  5 15:48:22.752: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-configmaps-5dbb3e86-9662-44b0-b8c6-355de8610918 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 15:48:22.786: INFO: Waiting for pod pod-configmaps-5dbb3e86-9662-44b0-b8c6-355de8610918 to disappear
    Sep  5 15:48:22.792: INFO: Pod pod-configmaps-5dbb3e86-9662-44b0-b8c6-355de8610918 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:48:22.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-8812" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":470,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:242.739 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":107,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 45 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:48:37.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-1783" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":15,"skipped":205,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:48:33.581: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-806f4da0-3bd7-4a7f-a01a-7c3ec19f30fd
    STEP: Creating a pod to test consume configMaps
    Sep  5 15:48:33.639: INFO: Waiting up to 5m0s for pod "pod-configmaps-ab2c1cda-cb75-43f2-93c1-1eb33b9d300b" in namespace "configmap-9472" to be "Succeeded or Failed"
    Sep  5 15:48:33.646: INFO: Pod "pod-configmaps-ab2c1cda-cb75-43f2-93c1-1eb33b9d300b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.248792ms
    Sep  5 15:48:35.653: INFO: Pod "pod-configmaps-ab2c1cda-cb75-43f2-93c1-1eb33b9d300b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013058034s
    Sep  5 15:48:37.663: INFO: Pod "pod-configmaps-ab2c1cda-cb75-43f2-93c1-1eb33b9d300b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0230014s
    STEP: Saw pod success
    Sep  5 15:48:37.664: INFO: Pod "pod-configmaps-ab2c1cda-cb75-43f2-93c1-1eb33b9d300b" satisfied condition "Succeeded or Failed"
    Sep  5 15:48:37.670: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-configmaps-ab2c1cda-cb75-43f2-93c1-1eb33b9d300b container configmap-volume-test: <nil>
    STEP: delete the pod
    Sep  5 15:48:37.718: INFO: Waiting for pod pod-configmaps-ab2c1cda-cb75-43f2-93c1-1eb33b9d300b to disappear
    Sep  5 15:48:37.722: INFO: Pod pod-configmaps-ab2c1cda-cb75-43f2-93c1-1eb33b9d300b no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:48:37.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-9472" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":133,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:48:41.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-5010" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":16,"skipped":226,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:48:37.778: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on node default medium
    Sep  5 15:48:37.875: INFO: Waiting up to 5m0s for pod "pod-4a2d75ff-ca78-4888-8432-3a12a168b8be" in namespace "emptydir-4212" to be "Succeeded or Failed"
    Sep  5 15:48:37.880: INFO: Pod "pod-4a2d75ff-ca78-4888-8432-3a12a168b8be": Phase="Pending", Reason="", readiness=false. Elapsed: 5.237094ms
    Sep  5 15:48:39.886: INFO: Pod "pod-4a2d75ff-ca78-4888-8432-3a12a168b8be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011263901s
    Sep  5 15:48:41.891: INFO: Pod "pod-4a2d75ff-ca78-4888-8432-3a12a168b8be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016145055s
    STEP: Saw pod success
    Sep  5 15:48:41.891: INFO: Pod "pod-4a2d75ff-ca78-4888-8432-3a12a168b8be" satisfied condition "Succeeded or Failed"
    Sep  5 15:48:41.894: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-grv26 pod pod-4a2d75ff-ca78-4888-8432-3a12a168b8be container test-container: <nil>
    STEP: delete the pod
    Sep  5 15:48:41.910: INFO: Waiting for pod pod-4a2d75ff-ca78-4888-8432-3a12a168b8be to disappear
    Sep  5 15:48:41.914: INFO: Pod pod-4a2d75ff-ca78-4888-8432-3a12a168b8be no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:48:41.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-4212" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":147,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
    STEP: Deploying the webhook pod
    STEP: Wait for the deployment to be ready
    Sep  5 15:48:42.244: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
    STEP: Deploying the webhook service
    STEP: Verifying the service has paired with the endpoint
    Sep  5 15:48:45.274: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
    [It] should unconditionally reject operations on fail closed webhook [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
    STEP: create a namespace for the webhook
    STEP: create a configmap should be unconditionally rejected by the webhook
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:48:45.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "webhook-9099" for this suite.
    STEP: Destroying namespace "webhook-9099-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":17,"skipped":235,"failed":0}
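    The fail-closed webhook spec registers an admission webhook whose backend can never be reached and whose failurePolicy is Fail, then checks that a ConfigMap create in the marked namespace is rejected. A rough sketch of such a registration is below; the names and service reference are placeholders, and the suite additionally scopes the webhook to a labelled namespace, which is omitted here.

        apiVersion: admissionregistration.k8s.io/v1
        kind: ValidatingWebhookConfiguration
        metadata:
          name: fail-closed-demo                 # placeholder
        webhooks:
        - name: fail-closed.demo.example.com
          failurePolicy: Fail                    # reject requests when the webhook cannot be called
          sideEffects: None
          admissionReviewVersions: ["v1"]
          clientConfig:
            service:
              namespace: webhook-demo            # deliberately points at a service that is unreachable
              name: no-such-service
              path: /validate
          rules:
          - apiGroups: [""]
            apiVersions: ["v1"]
            operations: ["CREATE"]
            resources: ["configmaps"]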

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:48:48.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-368" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":5,"skipped":149,"failed":0}
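    The DisruptionController spec only lists and deletes a collection of PodDisruptionBudgets across namespaces; for reference, a minimal PDB of the kind being listed looks roughly like this (name, selector and threshold are illustrative):

        apiVersion: policy/v1
        kind: PodDisruptionBudget
        metadata:
          name: demo-pdb
        spec:
          minAvailable: 1              # maxUnavailable is the alternative form
          selector:
            matchLabels:
              app: demo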

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:48:48.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-2546" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":18,"skipped":268,"failed":0}
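    The "exceeded quota" spec works by creating a ReplicationController in a namespace whose ResourceQuota is too small, so the controller cannot create all replicas and records the quota error as a ReplicaFailure condition in the RC status. A hedged sketch of the two objects involved, with illustrative names and numbers:

        apiVersion: v1
        kind: ResourceQuota
        metadata:
          name: pod-quota
        spec:
          hard:
            pods: "2"                  # at most two pods in the namespace
        ---
        apiVersion: v1
        kind: ReplicationController
        metadata:
          name: quota-demo
        spec:
          replicas: 3                  # exceeds the quota, so a ReplicaFailure condition is surfaced
          selector:
            app: quota-demo
          template:
            metadata:
              labels:
                app: quota-demo
            spec:
              containers:
              - name: pause
                image: registry.k8s.io/pause:3.9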

    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 15:48:48.606: INFO: Waiting up to 5m0s for pod "downwardapi-volume-518eef6e-245c-4efa-94c5-508d80a7672e" in namespace "downward-api-7617" to be "Succeeded or Failed"
    Sep  5 15:48:48.610: INFO: Pod "downwardapi-volume-518eef6e-245c-4efa-94c5-508d80a7672e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.676846ms
    Sep  5 15:48:50.615: INFO: Pod "downwardapi-volume-518eef6e-245c-4efa-94c5-508d80a7672e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008836671s
    Sep  5 15:48:52.620: INFO: Pod "downwardapi-volume-518eef6e-245c-4efa-94c5-508d80a7672e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014189619s
    STEP: Saw pod success
    Sep  5 15:48:52.620: INFO: Pod "downwardapi-volume-518eef6e-245c-4efa-94c5-508d80a7672e" satisfied condition "Succeeded or Failed"
    Sep  5 15:48:52.624: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod downwardapi-volume-518eef6e-245c-4efa-94c5-508d80a7672e container client-container: <nil>
    STEP: delete the pod
    Sep  5 15:48:52.644: INFO: Waiting for pod downwardapi-volume-518eef6e-245c-4efa-94c5-508d80a7672e to disappear
    Sep  5 15:48:52.647: INFO: Pod downwardapi-volume-518eef6e-245c-4efa-94c5-508d80a7672e no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:48:52.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-7617" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":271,"failed":0}
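    This Downward API spec mounts a downwardAPI volume that exposes the container's CPU limit through resourceFieldRef; because the container sets no limit, the kubelet falls back to the node's allocatable CPU, which is the value the test asserts on. A minimal sketch with placeholder names:

        apiVersion: v1
        kind: Pod
        metadata:
          name: downwardapi-cpu-demo
        spec:
          restartPolicy: Never
          containers:
          - name: client-container              # no resources.limits set on purpose
            image: busybox:1.36
            command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
            volumeMounts:
            - name: podinfo
              mountPath: /etc/podinfo
          volumes:
          - name: podinfo
            downwardAPI:
              items:
              - path: cpu_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.cpu           # defaults to node allocatable CPU when unset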

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:48:55.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-189" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":20,"skipped":280,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:48:55.809: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-a6ea9196-9968-418b-99be-65f4dfd60541
    STEP: Creating a pod to test consume secrets
    Sep  5 15:48:55.862: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d0d68c3e-b5b9-4a5f-8a34-7f8ea791cd0b" in namespace "projected-7258" to be "Succeeded or Failed"
    Sep  5 15:48:55.866: INFO: Pod "pod-projected-secrets-d0d68c3e-b5b9-4a5f-8a34-7f8ea791cd0b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.659264ms
    Sep  5 15:48:57.870: INFO: Pod "pod-projected-secrets-d0d68c3e-b5b9-4a5f-8a34-7f8ea791cd0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007734189s
    Sep  5 15:48:59.879: INFO: Pod "pod-projected-secrets-d0d68c3e-b5b9-4a5f-8a34-7f8ea791cd0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016994522s
    STEP: Saw pod success
    Sep  5 15:48:59.879: INFO: Pod "pod-projected-secrets-d0d68c3e-b5b9-4a5f-8a34-7f8ea791cd0b" satisfied condition "Succeeded or Failed"
    Sep  5 15:48:59.885: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-projected-secrets-d0d68c3e-b5b9-4a5f-8a34-7f8ea791cd0b container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  5 15:48:59.921: INFO: Waiting for pod pod-projected-secrets-d0d68c3e-b5b9-4a5f-8a34-7f8ea791cd0b to disappear
    Sep  5 15:48:59.929: INFO: Pod pod-projected-secrets-d0d68c3e-b5b9-4a5f-8a34-7f8ea791cd0b no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:48:59.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7258" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":296,"failed":0}
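    The projected-secret spec mounts a Secret through a projected volume while running as a non-root user, with an explicit defaultMode on the volume and an fsGroup on the pod, and then checks the resulting file ownership and permissions. Roughly (uids, modes and names are illustrative placeholders):

        apiVersion: v1
        kind: Pod
        metadata:
          name: projected-secret-demo
        spec:
          restartPolicy: Never
          securityContext:
            runAsUser: 1000
            runAsNonRoot: true
            fsGroup: 2000                        # group ownership applied to the projected files
          containers:
          - name: projected-secret-volume-test
            image: busybox:1.36
            command: ["sh", "-c", "ls -ln /etc/projected"]
            volumeMounts:
            - name: creds
              mountPath: /etc/projected
              readOnly: true
          volumes:
          - name: creds
            projected:
              defaultMode: 0440                  # file mode for the projected entries
              sources:
              - secret:
                  name: projected-secret-demo    # placeholder Secret name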

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:49:00.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-3500" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":22,"skipped":320,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:49:00.306: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on tmpfs
    Sep  5 15:49:00.398: INFO: Waiting up to 5m0s for pod "pod-b0dace27-156a-4986-9bc4-004c471e2258" in namespace "emptydir-7645" to be "Succeeded or Failed"
    Sep  5 15:49:00.406: INFO: Pod "pod-b0dace27-156a-4986-9bc4-004c471e2258": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013167ms
    Sep  5 15:49:02.417: INFO: Pod "pod-b0dace27-156a-4986-9bc4-004c471e2258": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019168022s
    Sep  5 15:49:04.433: INFO: Pod "pod-b0dace27-156a-4986-9bc4-004c471e2258": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034299193s
    STEP: Saw pod success
    Sep  5 15:49:04.433: INFO: Pod "pod-b0dace27-156a-4986-9bc4-004c471e2258" satisfied condition "Succeeded or Failed"
    Sep  5 15:49:04.444: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-b0dace27-156a-4986-9bc4-004c471e2258 container test-container: <nil>
    STEP: delete the pod
    Sep  5 15:49:04.520: INFO: Waiting for pod pod-b0dace27-156a-4986-9bc4-004c471e2258 to disappear
    Sep  5 15:49:04.530: INFO: Pod pod-b0dace27-156a-4986-9bc4-004c471e2258 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:49:04.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-7645" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":336,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:49:04.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-1157" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":24,"skipped":345,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-configmap-7hcd
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  5 15:48:48.178: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7hcd" in namespace "subpath-916" to be "Succeeded or Failed"
    Sep  5 15:48:48.183: INFO: Pod "pod-subpath-test-configmap-7hcd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.726863ms
    Sep  5 15:48:50.188: INFO: Pod "pod-subpath-test-configmap-7hcd": Phase="Running", Reason="", readiness=true. Elapsed: 2.009352905s
    Sep  5 15:48:52.194: INFO: Pod "pod-subpath-test-configmap-7hcd": Phase="Running", Reason="", readiness=true. Elapsed: 4.015057402s
    Sep  5 15:48:54.207: INFO: Pod "pod-subpath-test-configmap-7hcd": Phase="Running", Reason="", readiness=true. Elapsed: 6.027594103s
    Sep  5 15:48:56.211: INFO: Pod "pod-subpath-test-configmap-7hcd": Phase="Running", Reason="", readiness=true. Elapsed: 8.032159366s
    Sep  5 15:48:58.218: INFO: Pod "pod-subpath-test-configmap-7hcd": Phase="Running", Reason="", readiness=true. Elapsed: 10.038641012s
... skipping 2 lines ...
    Sep  5 15:49:04.254: INFO: Pod "pod-subpath-test-configmap-7hcd": Phase="Running", Reason="", readiness=true. Elapsed: 16.075371452s
    Sep  5 15:49:06.263: INFO: Pod "pod-subpath-test-configmap-7hcd": Phase="Running", Reason="", readiness=true. Elapsed: 18.084262496s
    Sep  5 15:49:08.269: INFO: Pod "pod-subpath-test-configmap-7hcd": Phase="Running", Reason="", readiness=true. Elapsed: 20.090346984s
    Sep  5 15:49:10.275: INFO: Pod "pod-subpath-test-configmap-7hcd": Phase="Running", Reason="", readiness=false. Elapsed: 22.095621298s
    Sep  5 15:49:12.279: INFO: Pod "pod-subpath-test-configmap-7hcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.100199196s
    STEP: Saw pod success
    Sep  5 15:49:12.279: INFO: Pod "pod-subpath-test-configmap-7hcd" satisfied condition "Succeeded or Failed"
    Sep  5 15:49:12.283: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-subpath-test-configmap-7hcd container test-container-subpath-configmap-7hcd: <nil>
    STEP: delete the pod
    Sep  5 15:49:12.307: INFO: Waiting for pod pod-subpath-test-configmap-7hcd to disappear
    Sep  5 15:49:12.310: INFO: Pod pod-subpath-test-configmap-7hcd no longer exists
    STEP: Deleting pod pod-subpath-test-configmap-7hcd
    Sep  5 15:49:12.310: INFO: Deleting pod "pod-subpath-test-configmap-7hcd" in namespace "subpath-916"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:49:12.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-916" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":172,"failed":0}
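    The Subpath "atomic writer" spec mounts a single ConfigMap key at a file path via subPath and keeps reading it while the pod runs to make sure the content stays consistent. The generated test pod is more involved; a minimal illustration of the mount shape (names and data are placeholders):

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: subpath-demo-config
        data:
          config.txt: "hello from a configmap"
        ---
        apiVersion: v1
        kind: Pod
        metadata:
          name: subpath-demo
        spec:
          restartPolicy: Never
          containers:
          - name: test-container
            image: busybox:1.36
            command: ["sh", "-c", "cat /etc/app/config.txt"]
            volumeMounts:
            - name: cfg
              mountPath: /etc/app/config.txt
              subPath: config.txt                # mount only this key, not the whole volume
          volumes:
          - name: cfg
            configMap:
              name: subpath-demo-config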

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:49:22.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-1447" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":26,"skipped":480,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:49:23.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-5165" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":27,"skipped":493,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods Extended
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:49:23.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-9076" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":28,"skipped":506,"failed":0}
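    The "Set QOS Class" spec creates a pod whose resource requests equal its limits for both CPU and memory and then checks that status.qosClass is reported as Guaranteed. An illustrative manifest (name, image and quantities are placeholders):

        apiVersion: v1
        kind: Pod
        metadata:
          name: qos-guaranteed-demo
        spec:
          containers:
          - name: app
            image: registry.k8s.io/pause:3.9
            resources:
              requests:
                cpu: 100m
                memory: 128Mi
              limits:
                cpu: 100m                # requests == limits for every resource => Guaranteed
                memory: 128Mi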

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:49:23.131: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-runtime
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: create the container
    STEP: wait for the container to reach Failed
    STEP: get the container status
    STEP: the container should be terminated
    STEP: the termination message should be set
    Sep  5 15:49:27.211: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
    STEP: delete the container
    [AfterEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:49:27.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-501" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":514,"failed":0}
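    The termination-message spec relies on terminationMessagePolicy: FallbackToLogsOnError: when a container exits with an error and has written nothing to /dev/termination-log, the kubelet uses the tail of the container log ("DONE" in the run above) as the termination message. A sketch with placeholder names:

        apiVersion: v1
        kind: Pod
        metadata:
          name: termination-message-demo
        spec:
          restartPolicy: Never
          containers:
          - name: failing-container
            image: busybox:1.36
            command: ["sh", "-c", "echo DONE; exit 1"]       # log tail becomes the termination message
            terminationMessagePolicy: FallbackToLogsOnError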

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with secret pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-secret-2whb
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  5 15:49:05.198: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-2whb" in namespace "subpath-9410" to be "Succeeded or Failed"
    Sep  5 15:49:05.208: INFO: Pod "pod-subpath-test-secret-2whb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.275486ms
    Sep  5 15:49:07.221: INFO: Pod "pod-subpath-test-secret-2whb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022904121s
    Sep  5 15:49:09.226: INFO: Pod "pod-subpath-test-secret-2whb": Phase="Running", Reason="", readiness=true. Elapsed: 4.027951192s
    Sep  5 15:49:11.231: INFO: Pod "pod-subpath-test-secret-2whb": Phase="Running", Reason="", readiness=true. Elapsed: 6.033519344s
    Sep  5 15:49:13.237: INFO: Pod "pod-subpath-test-secret-2whb": Phase="Running", Reason="", readiness=true. Elapsed: 8.038879863s
    Sep  5 15:49:15.242: INFO: Pod "pod-subpath-test-secret-2whb": Phase="Running", Reason="", readiness=true. Elapsed: 10.044494099s
... skipping 3 lines ...
    Sep  5 15:49:23.264: INFO: Pod "pod-subpath-test-secret-2whb": Phase="Running", Reason="", readiness=true. Elapsed: 18.066035252s
    Sep  5 15:49:25.269: INFO: Pod "pod-subpath-test-secret-2whb": Phase="Running", Reason="", readiness=true. Elapsed: 20.071531345s
    Sep  5 15:49:27.276: INFO: Pod "pod-subpath-test-secret-2whb": Phase="Running", Reason="", readiness=true. Elapsed: 22.07850581s
    Sep  5 15:49:29.282: INFO: Pod "pod-subpath-test-secret-2whb": Phase="Running", Reason="", readiness=false. Elapsed: 24.083866593s
    Sep  5 15:49:31.289: INFO: Pod "pod-subpath-test-secret-2whb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.091266787s
    STEP: Saw pod success
    Sep  5 15:49:31.289: INFO: Pod "pod-subpath-test-secret-2whb" satisfied condition "Succeeded or Failed"
    Sep  5 15:49:31.293: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-subpath-test-secret-2whb container test-container-subpath-secret-2whb: <nil>
    STEP: delete the pod
    Sep  5 15:49:31.316: INFO: Waiting for pod pod-subpath-test-secret-2whb to disappear
    Sep  5 15:49:31.320: INFO: Pod pod-subpath-test-secret-2whb no longer exists
    STEP: Deleting pod pod-subpath-test-secret-2whb
    Sep  5 15:49:31.320: INFO: Deleting pod "pod-subpath-test-secret-2whb" in namespace "subpath-9410"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:49:31.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-9410" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":25,"skipped":368,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 34 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:49:37.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-4393" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":30,"skipped":553,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:49:44.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-6418" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":31,"skipped":617,"failed":0}
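    The CustomResourcePublishOpenAPI spec exercises a CRD that preserves unknown fields at the schema root, i.e. a structural schema whose top level sets x-kubernetes-preserve-unknown-fields: true. A rough sketch of such a CRD (group and kind names are placeholders, not the ones the suite generates):

        apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        metadata:
          name: widgets.demo.example.com
        spec:
          group: demo.example.com
          scope: Namespaced
          names:
            plural: widgets
            singular: widget
            kind: Widget
          versions:
          - name: v1
            served: true
            storage: true
            schema:
              openAPIV3Schema:
                type: object
                x-kubernetes-preserve-unknown-fields: true   # keep arbitrary fields at the root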

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:49:51.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-3338" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":32,"skipped":626,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 52 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:49:57.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-2443" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":33,"skipped":630,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Discovery
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 85 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:49:58.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "discovery-2948" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":34,"skipped":661,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 28 lines ...
    
    STEP: creating a pod to probe DNS
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  5 15:48:41.521: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-8976.svc.cluster.local from pod dns-8976/dns-test-78d1aab0-c9a1-4d03-b552-15a8fbdcf3b1: the server is currently unable to handle the request (get pods dns-test-78d1aab0-c9a1-4d03-b552-15a8fbdcf3b1)
    Sep  5 15:50:07.399: FAIL: Unable to read jessie_udp@dns-test-service-3.dns-8976.svc.cluster.local from pod dns-8976/dns-test-78d1aab0-c9a1-4d03-b552-15a8fbdcf3b1: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-8976/pods/dns-test-78d1aab0-c9a1-4d03-b552-15a8fbdcf3b1/proxy/results/jessie_udp@dns-test-service-3.dns-8976.svc.cluster.local": context deadline exceeded
    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc00011c010, 0x7f85c239a5b8, 0x18, 0xc004a8c3a8)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc00011c010, 0xc004a88640, 0x2a14500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f
... skipping 15 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b
    testing.tRunner(0xc000c4ac00, 0x729a2d8)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0905 15:50:07.400308      17 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep  5 15:50:07.399: Unable to read jessie_udp@dns-test-service-3.dns-8976.svc.cluster.local from pod dns-8976/dns-test-78d1aab0-c9a1-4d03-b552-15a8fbdcf3b1: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-8976/pods/dns-test-78d1aab0-c9a1-4d03-b552-15a8fbdcf3b1/proxy/results/jessie_udp@dns-test-service-3.dns-8976.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:217, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc00011c010, 0x7f85c239a5b8, 0x18, 0xc004a8c3a8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc00011c010, 0xc004a88640, 0x2a14500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x78de4a8, 0xc00011c010, 0xc004a8c301, 0xc004a8c3a8, 0xc004a88640, 0x6826620, 0xc004a88640)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:577 +0xe5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x78de4a8, 0xc00011c010, 0x12a05f200, 0x8bb2c97000, 0xc004a88640, 0x6d6e4e0, 0x2521201)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc0023a2770, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc0039fa580, 0x2, 0x2, 0x702fe9b, 0x7, 0xc004aac800, 0x7971668, 0xc0049f29a0, 0x1, 0x70515b7, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x13c\nk8s.io/kubernetes/test/e2e/network.validateTargetedProbeOutput(0xc0012271e0, 0xc004aac800, 0xc0039fa580, 0x2, 0x2, 0x70515b7, 0x10)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:548 +0x376\nk8s.io/kubernetes/test/e2e/network.glob..func2.9()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:354 +0x6ed\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc000c4ac00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc000c4ac00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b\ntesting.tRunner(0xc000c4ac00, 0x729a2d8)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.

    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6bbe4c0, 0xc004a9a100)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6bbe4c0, 0xc004a9a100)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc004940300, 0x16b, 0x88abe86, 0x7d, 0xd9, 0xc004954a80, 0x9fe)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x62ef260, 0x77956f0)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc004940300, 0x16b, 0xc003ffde88, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc004940300, 0x16b, 0xc003ffdf70, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x70d3e4f, 0x24, 0xc003ffe1d0, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc00011c010, 0x7f85c239a5b8, 0x18, 0xc004a8c3a8)
... skipping 57 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 15:50:07.399: Unable to read jessie_udp@dns-test-service-3.dns-8976.svc.cluster.local from pod dns-8976/dns-test-78d1aab0-c9a1-4d03-b552-15a8fbdcf3b1: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-8976/pods/dns-test-78d1aab0-c9a1-4d03-b552-15a8fbdcf3b1/proxy/results/jessie_udp@dns-test-service-3.dns-8976.svc.cluster.local": context deadline exceeded
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217
    ------------------------------
    {"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":35,"skipped":674,"failed":0}

    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:49:58.363: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename watch
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:50:08.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-5377" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":36,"skipped":674,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 63 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:50:10.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-2621" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":-1,"completed":37,"skipped":675,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:50:12.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-3097" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":196,"failed":0}
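    The probing spec runs a container whose readiness probe always fails and asserts that the pod never becomes Ready and the container is never restarted (failed readiness probes, unlike liveness probes, do not trigger restarts). Sketch with placeholder names:

        apiVersion: v1
        kind: Pod
        metadata:
          name: readiness-never-ready-demo
        spec:
          containers:
          - name: app
            image: busybox:1.36
            command: ["sh", "-c", "sleep 3600"]
            readinessProbe:
              exec:
                command: ["/bin/false"]          # always fails, so the pod never reports Ready
              initialDelaySeconds: 5
              periodSeconds: 5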

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:50:12.510: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on tmpfs
    Sep  5 15:50:12.557: INFO: Waiting up to 5m0s for pod "pod-fd82fdac-7d93-4c26-aca8-4cae3b33044b" in namespace "emptydir-7175" to be "Succeeded or Failed"
    Sep  5 15:50:12.562: INFO: Pod "pod-fd82fdac-7d93-4c26-aca8-4cae3b33044b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.81805ms
    Sep  5 15:50:14.568: INFO: Pod "pod-fd82fdac-7d93-4c26-aca8-4cae3b33044b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011289696s
    Sep  5 15:50:16.573: INFO: Pod "pod-fd82fdac-7d93-4c26-aca8-4cae3b33044b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015904866s
    STEP: Saw pod success
    Sep  5 15:50:16.573: INFO: Pod "pod-fd82fdac-7d93-4c26-aca8-4cae3b33044b" satisfied condition "Succeeded or Failed"
    Sep  5 15:50:16.577: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-fd82fdac-7d93-4c26-aca8-4cae3b33044b container test-container: <nil>
    STEP: delete the pod
    Sep  5 15:50:16.594: INFO: Waiting for pod pod-fd82fdac-7d93-4c26-aca8-4cae3b33044b to disappear
    Sep  5 15:50:16.598: INFO: Pod pod-fd82fdac-7d93-4c26-aca8-4cae3b33044b no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:50:16.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-7175" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":213,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:50:16.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-4875" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":9,"skipped":240,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
    STEP: Destroying namespace "services-2128" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":10,"skipped":254,"failed":0}
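    The Services spec starts from a Service of type ExternalName and updates it to type ClusterIP, then verifies it serves traffic through its endpoints. The two shapes of the same Service, roughly (name, port and selector are illustrative):

        apiVersion: v1
        kind: Service
        metadata:
          name: externalname-demo
        spec:
          type: ExternalName
          externalName: example.com
        ---
        # after the update: externalName is dropped, a selector and port are added,
        # and the Service is allocated a cluster IP
        apiVersion: v1
        kind: Service
        metadata:
          name: externalname-demo
        spec:
          type: ClusterIP
          selector:
            app: externalname-demo
          ports:
          - port: 80
            targetPort: 9376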

    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:50:26.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-1916" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":257,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 38 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:50:29.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-6645" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":683,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:50:26.718: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-12694c55-6d79-40ce-9bc4-209d1e57b8c0
    STEP: Creating a pod to test consume configMaps
    Sep  5 15:50:26.769: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-593d1daf-4549-4557-ab18-81ca201be5e6" in namespace "projected-4240" to be "Succeeded or Failed"
    Sep  5 15:50:26.774: INFO: Pod "pod-projected-configmaps-593d1daf-4549-4557-ab18-81ca201be5e6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.396948ms
    Sep  5 15:50:28.780: INFO: Pod "pod-projected-configmaps-593d1daf-4549-4557-ab18-81ca201be5e6": Phase="Running", Reason="", readiness=false. Elapsed: 2.011233349s
    Sep  5 15:50:30.801: INFO: Pod "pod-projected-configmaps-593d1daf-4549-4557-ab18-81ca201be5e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03171343s
    STEP: Saw pod success
    Sep  5 15:50:30.801: INFO: Pod "pod-projected-configmaps-593d1daf-4549-4557-ab18-81ca201be5e6" satisfied condition "Succeeded or Failed"
    Sep  5 15:50:30.805: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-projected-configmaps-593d1daf-4549-4557-ab18-81ca201be5e6 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 15:50:30.833: INFO: Waiting for pod pod-projected-configmaps-593d1daf-4549-4557-ab18-81ca201be5e6 to disappear
    Sep  5 15:50:30.844: INFO: Pod pod-projected-configmaps-593d1daf-4549-4557-ab18-81ca201be5e6 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:50:30.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4240" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":297,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:50:29.207: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
    Sep  5 15:50:29.248: INFO: Waiting up to 5m0s for pod "security-context-ecf3d54f-6656-4c5f-95c8-7d44a1066a8f" in namespace "security-context-1652" to be "Succeeded or Failed"
    Sep  5 15:50:29.253: INFO: Pod "security-context-ecf3d54f-6656-4c5f-95c8-7d44a1066a8f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.480084ms
    Sep  5 15:50:31.267: INFO: Pod "security-context-ecf3d54f-6656-4c5f-95c8-7d44a1066a8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019465508s
    Sep  5 15:50:33.272: INFO: Pod "security-context-ecf3d54f-6656-4c5f-95c8-7d44a1066a8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02396258s
    STEP: Saw pod success
    Sep  5 15:50:33.272: INFO: Pod "security-context-ecf3d54f-6656-4c5f-95c8-7d44a1066a8f" satisfied condition "Succeeded or Failed"
    Sep  5 15:50:33.275: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-vcz8p pod security-context-ecf3d54f-6656-4c5f-95c8-7d44a1066a8f container test-container: <nil>
    STEP: delete the pod
    Sep  5 15:50:33.294: INFO: Waiting for pod security-context-ecf3d54f-6656-4c5f-95c8-7d44a1066a8f to disappear
    Sep  5 15:50:33.297: INFO: Pod security-context-ecf3d54f-6656-4c5f-95c8-7d44a1066a8f no longer exists
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:50:33.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-1652" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":39,"skipped":713,"failed":0}
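    The Security Context spec sets runAsUser and runAsGroup and checks the uid/gid the process actually gets inside the container (this particular run drives it through the pod-level field, per the STEP line above). A sketch using the container-level fields, with illustrative ids and names:

        apiVersion: v1
        kind: Pod
        metadata:
          name: runas-demo
        spec:
          restartPolicy: Never
          containers:
          - name: test-container
            image: busybox:1.36
            command: ["sh", "-c", "id -u; id -g"]   # should print the ids configured below
            securityContext:
              runAsUser: 1001
              runAsGroup: 2002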

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:50:30.933: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir volume type on node default medium
    Sep  5 15:50:30.981: INFO: Waiting up to 5m0s for pod "pod-35a1e960-45a4-4a19-ba96-31374c9abfa8" in namespace "emptydir-7209" to be "Succeeded or Failed"
    Sep  5 15:50:30.986: INFO: Pod "pod-35a1e960-45a4-4a19-ba96-31374c9abfa8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.245021ms
    Sep  5 15:50:32.990: INFO: Pod "pod-35a1e960-45a4-4a19-ba96-31374c9abfa8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008703445s
    Sep  5 15:50:34.997: INFO: Pod "pod-35a1e960-45a4-4a19-ba96-31374c9abfa8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015253657s
    STEP: Saw pod success
    Sep  5 15:50:34.997: INFO: Pod "pod-35a1e960-45a4-4a19-ba96-31374c9abfa8" satisfied condition "Succeeded or Failed"
    Sep  5 15:50:35.002: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-35a1e960-45a4-4a19-ba96-31374c9abfa8 container test-container: <nil>
    STEP: delete the pod
    Sep  5 15:50:35.031: INFO: Waiting for pod pod-35a1e960-45a4-4a19-ba96-31374c9abfa8 to disappear
    Sep  5 15:50:35.033: INFO: Pod pod-35a1e960-45a4-4a19-ba96-31374c9abfa8 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:50:35.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-7209" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":319,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:50:33.330: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via environment variable [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap configmap-7112/configmap-test-c381347c-4771-4205-ac45-13ca2e61f38f
    STEP: Creating a pod to test consume configMaps
    Sep  5 15:50:33.377: INFO: Waiting up to 5m0s for pod "pod-configmaps-f3aa9890-898b-4178-bf1c-918a2593b58c" in namespace "configmap-7112" to be "Succeeded or Failed"
    Sep  5 15:50:33.380: INFO: Pod "pod-configmaps-f3aa9890-898b-4178-bf1c-918a2593b58c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.591032ms
    Sep  5 15:50:35.383: INFO: Pod "pod-configmaps-f3aa9890-898b-4178-bf1c-918a2593b58c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006190116s
    Sep  5 15:50:37.388: INFO: Pod "pod-configmaps-f3aa9890-898b-4178-bf1c-918a2593b58c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011010375s
    STEP: Saw pod success
    Sep  5 15:50:37.388: INFO: Pod "pod-configmaps-f3aa9890-898b-4178-bf1c-918a2593b58c" satisfied condition "Succeeded or Failed"
    Sep  5 15:50:37.392: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-configmaps-f3aa9890-898b-4178-bf1c-918a2593b58c container env-test: <nil>
    STEP: delete the pod
    Sep  5 15:50:37.410: INFO: Waiting for pod pod-configmaps-f3aa9890-898b-4178-bf1c-918a2593b58c to disappear
    Sep  5 15:50:37.414: INFO: Pod pod-configmaps-f3aa9890-898b-4178-bf1c-918a2593b58c no longer exists
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:50:37.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-7112" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":727,"failed":0}
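    The ConfigMap environment-variable spec injects a ConfigMap key into a container via env valueFrom.configMapKeyRef and checks that the container observes the value. Sketch with placeholder names and data:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: env-demo-config
        data:
          DATA_1: "value-1"
        ---
        apiVersion: v1
        kind: Pod
        metadata:
          name: configmap-env-demo
        spec:
          restartPolicy: Never
          containers:
          - name: env-test
            image: busybox:1.36
            command: ["sh", "-c", "echo $CONFIG_DATA_1"]
            env:
            - name: CONFIG_DATA_1
              valueFrom:
                configMapKeyRef:
                  name: env-demo-config
                  key: DATA_1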

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:50:38.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-181" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":14,"skipped":327,"failed":0}
    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:50:42.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-9954" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":41,"skipped":731,"failed":0}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:50:48.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-7876" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":15,"skipped":343,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide podname only [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 15:50:48.350: INFO: Waiting up to 5m0s for pod "downwardapi-volume-563d4127-f3e5-4999-b729-1c98403123b7" in namespace "projected-88" to be "Succeeded or Failed"
    Sep  5 15:50:48.354: INFO: Pod "downwardapi-volume-563d4127-f3e5-4999-b729-1c98403123b7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.323728ms
    Sep  5 15:50:50.359: INFO: Pod "downwardapi-volume-563d4127-f3e5-4999-b729-1c98403123b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008390693s
    Sep  5 15:50:52.364: INFO: Pod "downwardapi-volume-563d4127-f3e5-4999-b729-1c98403123b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013237065s
    STEP: Saw pod success
    Sep  5 15:50:52.364: INFO: Pod "downwardapi-volume-563d4127-f3e5-4999-b729-1c98403123b7" satisfied condition "Succeeded or Failed"
    Sep  5 15:50:52.367: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod downwardapi-volume-563d4127-f3e5-4999-b729-1c98403123b7 container client-container: <nil>
    STEP: delete the pod
    Sep  5 15:50:52.388: INFO: Waiting for pod downwardapi-volume-563d4127-f3e5-4999-b729-1c98403123b7 to disappear
    Sep  5 15:50:52.392: INFO: Pod downwardapi-volume-563d4127-f3e5-4999-b729-1c98403123b7 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:50:52.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-88" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":374,"failed":0}
    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:50:55.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9158" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":372,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 41 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:51:02.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-6341" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":17,"skipped":383,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:51:04.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-2128" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":734,"failed":0}
    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:51:06.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-9364" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":27,"skipped":402,"failed":0}
    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's cpu limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 15:51:03.047: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0b72232-58a8-4e6b-b80a-bbef1ff3a8f5" in namespace "downward-api-3980" to be "Succeeded or Failed"
    Sep  5 15:51:03.052: INFO: Pod "downwardapi-volume-f0b72232-58a8-4e6b-b80a-bbef1ff3a8f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.811118ms
    Sep  5 15:51:05.058: INFO: Pod "downwardapi-volume-f0b72232-58a8-4e6b-b80a-bbef1ff3a8f5": Phase="Running", Reason="", readiness=false. Elapsed: 2.010564338s
    Sep  5 15:51:07.062: INFO: Pod "downwardapi-volume-f0b72232-58a8-4e6b-b80a-bbef1ff3a8f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014698962s
    STEP: Saw pod success
    Sep  5 15:51:07.062: INFO: Pod "downwardapi-volume-f0b72232-58a8-4e6b-b80a-bbef1ff3a8f5" satisfied condition "Succeeded or Failed"
    Sep  5 15:51:07.065: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-vcz8p pod downwardapi-volume-f0b72232-58a8-4e6b-b80a-bbef1ff3a8f5 container client-container: <nil>
    STEP: delete the pod
    Sep  5 15:51:07.079: INFO: Waiting for pod downwardapi-volume-f0b72232-58a8-4e6b-b80a-bbef1ff3a8f5 to disappear
    Sep  5 15:51:07.082: INFO: Pod downwardapi-volume-f0b72232-58a8-4e6b-b80a-bbef1ff3a8f5 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:51:07.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-3980" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":439,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 101 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:51:14.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-2770" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":19,"skipped":493,"failed":0}
    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Service endpoints latency
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 418 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:51:16.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svc-latency-6517" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":28,"skipped":413,"failed":0}
    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's memory limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 15:51:14.321: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8607700d-12d2-4468-b51c-91d7682897d9" in namespace "projected-6136" to be "Succeeded or Failed"
    Sep  5 15:51:14.330: INFO: Pod "downwardapi-volume-8607700d-12d2-4468-b51c-91d7682897d9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.955983ms
    Sep  5 15:51:16.362: INFO: Pod "downwardapi-volume-8607700d-12d2-4468-b51c-91d7682897d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04054753s
    Sep  5 15:51:18.367: INFO: Pod "downwardapi-volume-8607700d-12d2-4468-b51c-91d7682897d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046173204s
    STEP: Saw pod success
    Sep  5 15:51:18.367: INFO: Pod "downwardapi-volume-8607700d-12d2-4468-b51c-91d7682897d9" satisfied condition "Succeeded or Failed"
    Sep  5 15:51:18.371: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-vcz8p pod downwardapi-volume-8607700d-12d2-4468-b51c-91d7682897d9 container client-container: <nil>
    STEP: delete the pod
    Sep  5 15:51:18.392: INFO: Waiting for pod downwardapi-volume-8607700d-12d2-4468-b51c-91d7682897d9 to disappear
    Sep  5 15:51:18.395: INFO: Pod downwardapi-volume-8607700d-12d2-4468-b51c-91d7682897d9 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:51:18.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-6136" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":502,"failed":0}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:51:24.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-3149" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":29,"skipped":432,"failed":0}
    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 69 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:51:25.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-9824" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":21,"skipped":510,"failed":0}
    
    S
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":43,"skipped":740,"failed":0}
    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:51:26.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-2840" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":22,"skipped":515,"failed":0}
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:51:26.243: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename kubelet-test
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:51:28.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-5316" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":515,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:51:24.288: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-7d008e84-5f5d-48ee-b915-898b63474db2
    STEP: Creating a pod to test consume configMaps
    Sep  5 15:51:24.382: INFO: Waiting up to 5m0s for pod "pod-configmaps-badc9b54-7d9b-40d3-926a-5f2bb5e5a1a1" in namespace "configmap-7853" to be "Succeeded or Failed"
    Sep  5 15:51:24.389: INFO: Pod "pod-configmaps-badc9b54-7d9b-40d3-926a-5f2bb5e5a1a1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.608519ms
    Sep  5 15:51:26.395: INFO: Pod "pod-configmaps-badc9b54-7d9b-40d3-926a-5f2bb5e5a1a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013228943s
    Sep  5 15:51:28.405: INFO: Pod "pod-configmaps-badc9b54-7d9b-40d3-926a-5f2bb5e5a1a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022486207s
    STEP: Saw pod success
    Sep  5 15:51:28.405: INFO: Pod "pod-configmaps-badc9b54-7d9b-40d3-926a-5f2bb5e5a1a1" satisfied condition "Succeeded or Failed"
    Sep  5 15:51:28.413: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-configmaps-badc9b54-7d9b-40d3-926a-5f2bb5e5a1a1 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 15:51:28.453: INFO: Waiting for pod pod-configmaps-badc9b54-7d9b-40d3-926a-5f2bb5e5a1a1 to disappear
    Sep  5 15:51:28.464: INFO: Pod pod-configmaps-badc9b54-7d9b-40d3-926a-5f2bb5e5a1a1 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:51:28.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-7853" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":443,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's cpu request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 15:51:28.598: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7758983a-1ee1-48ca-950b-df3d92e2f576" in namespace "downward-api-4319" to be "Succeeded or Failed"
    Sep  5 15:51:28.605: INFO: Pod "downwardapi-volume-7758983a-1ee1-48ca-950b-df3d92e2f576": Phase="Pending", Reason="", readiness=false. Elapsed: 6.715382ms
    Sep  5 15:51:30.609: INFO: Pod "downwardapi-volume-7758983a-1ee1-48ca-950b-df3d92e2f576": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010623025s
    Sep  5 15:51:32.613: INFO: Pod "downwardapi-volume-7758983a-1ee1-48ca-950b-df3d92e2f576": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015259564s
    STEP: Saw pod success
    Sep  5 15:51:32.613: INFO: Pod "downwardapi-volume-7758983a-1ee1-48ca-950b-df3d92e2f576" satisfied condition "Succeeded or Failed"
    Sep  5 15:51:32.617: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod downwardapi-volume-7758983a-1ee1-48ca-950b-df3d92e2f576 container client-container: <nil>
    STEP: delete the pod
    Sep  5 15:51:32.629: INFO: Waiting for pod downwardapi-volume-7758983a-1ee1-48ca-950b-df3d92e2f576 to disappear
    Sep  5 15:51:32.632: INFO: Pod downwardapi-volume-7758983a-1ee1-48ca-950b-df3d92e2f576 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:51:32.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-4319" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":560,"failed":0}
    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:51:36.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-142" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":25,"skipped":572,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:51:36.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-969" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":26,"skipped":603,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:51:37.042: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep  5 15:51:37.094: INFO: Waiting up to 5m0s for pod "downward-api-8847acf3-078b-4493-a73b-0cbbadff9507" in namespace "downward-api-1940" to be "Succeeded or Failed"
    Sep  5 15:51:37.098: INFO: Pod "downward-api-8847acf3-078b-4493-a73b-0cbbadff9507": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170756ms
    Sep  5 15:51:39.103: INFO: Pod "downward-api-8847acf3-078b-4493-a73b-0cbbadff9507": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00949722s
    Sep  5 15:51:41.109: INFO: Pod "downward-api-8847acf3-078b-4493-a73b-0cbbadff9507": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015332571s
    STEP: Saw pod success
    Sep  5 15:51:41.109: INFO: Pod "downward-api-8847acf3-078b-4493-a73b-0cbbadff9507" satisfied condition "Succeeded or Failed"
    Sep  5 15:51:41.113: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-vcz8p pod downward-api-8847acf3-078b-4493-a73b-0cbbadff9507 container dapi-container: <nil>
    STEP: delete the pod
    Sep  5 15:51:41.138: INFO: Waiting for pod downward-api-8847acf3-078b-4493-a73b-0cbbadff9507 to disappear
    Sep  5 15:51:41.143: INFO: Pod downward-api-8847acf3-078b-4493-a73b-0cbbadff9507 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:51:41.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-1940" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":623,"failed":0}
    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:51:41.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-6486" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":28,"skipped":630,"failed":0}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:51:43.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-7923" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":44,"skipped":747,"failed":0}
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:51:43.467: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename replicaset
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:51:53.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-444" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":45,"skipped":747,"failed":0}
    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:51:53.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-2784" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":46,"skipped":760,"failed":0}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:52:16.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-3424" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":47,"skipped":765,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:52:16.924: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-90cffb24-4fb0-478f-9b87-32808d776361
    STEP: Creating a pod to test consume configMaps
    Sep  5 15:52:16.992: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dfe58b59-5205-4d53-b8d8-29e4dc730697" in namespace "projected-6034" to be "Succeeded or Failed"
    Sep  5 15:52:16.998: INFO: Pod "pod-projected-configmaps-dfe58b59-5205-4d53-b8d8-29e4dc730697": Phase="Pending", Reason="", readiness=false. Elapsed: 5.35082ms
    Sep  5 15:52:19.003: INFO: Pod "pod-projected-configmaps-dfe58b59-5205-4d53-b8d8-29e4dc730697": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01049769s
    Sep  5 15:52:21.009: INFO: Pod "pod-projected-configmaps-dfe58b59-5205-4d53-b8d8-29e4dc730697": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016016704s
    STEP: Saw pod success
    Sep  5 15:52:21.009: INFO: Pod "pod-projected-configmaps-dfe58b59-5205-4d53-b8d8-29e4dc730697" satisfied condition "Succeeded or Failed"
    Sep  5 15:52:21.012: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-projected-configmaps-dfe58b59-5205-4d53-b8d8-29e4dc730697 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 15:52:21.030: INFO: Waiting for pod pod-projected-configmaps-dfe58b59-5205-4d53-b8d8-29e4dc730697 to disappear
    Sep  5 15:52:21.034: INFO: Pod pod-projected-configmaps-dfe58b59-5205-4d53-b8d8-29e4dc730697 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:52:21.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-6034" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":791,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:52:21.086: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-42eb08a4-9be6-4c84-846d-2d2422f104ac
    STEP: Creating a pod to test consume secrets
    Sep  5 15:52:21.129: INFO: Waiting up to 5m0s for pod "pod-secrets-7158bf60-85da-4ae3-a388-ca150443c4ce" in namespace "secrets-6688" to be "Succeeded or Failed"
    Sep  5 15:52:21.133: INFO: Pod "pod-secrets-7158bf60-85da-4ae3-a388-ca150443c4ce": Phase="Pending", Reason="", readiness=false. Elapsed: 3.24578ms
    Sep  5 15:52:23.137: INFO: Pod "pod-secrets-7158bf60-85da-4ae3-a388-ca150443c4ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007971023s
    Sep  5 15:52:25.143: INFO: Pod "pod-secrets-7158bf60-85da-4ae3-a388-ca150443c4ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013678117s
    STEP: Saw pod success
    Sep  5 15:52:25.143: INFO: Pod "pod-secrets-7158bf60-85da-4ae3-a388-ca150443c4ce" satisfied condition "Succeeded or Failed"
    Sep  5 15:52:25.147: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-secrets-7158bf60-85da-4ae3-a388-ca150443c4ce container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  5 15:52:25.164: INFO: Waiting for pod pod-secrets-7158bf60-85da-4ae3-a388-ca150443c4ce to disappear
    Sep  5 15:52:25.168: INFO: Pod pod-secrets-7158bf60-85da-4ae3-a388-ca150443c4ce no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:52:25.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-6688" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":812,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:52:30.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-8853" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":50,"skipped":857,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 34 lines ...
    &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88  deployment-6918  0b530825-1f88-4c59-b08f-769bd58454f1 10312 3 2022-09-05 15:52:34 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 562b66dd-1c7a-4b66-8291-2dbef1fe40a1 0xc00600b447 0xc00600b448}] []  [{kube-controller-manager Update apps/v1 2022-09-05 15:52:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"562b66dd-1c7a-4b66-8291-2dbef1fe40a1\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-05 15:52:34 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00600b4e8 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
    Sep  5 15:52:36.590: INFO: All old ReplicaSets of Deployment "webserver-deployment":
    Sep  5 15:52:36.591: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb  deployment-6918  f0e7c680-3596-404d-b14c-c94831f04a8e 10309 3 2022-09-05 15:52:30 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 562b66dd-1c7a-4b66-8291-2dbef1fe40a1 0xc00600b547 0xc00600b548}] []  [{kube-controller-manager Update apps/v1 2022-09-05 15:52:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"562b66dd-1c7a-4b66-8291-2dbef1fe40a1\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-05 15:52:32 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [] []  []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00600b5d8 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
    Sep  5 15:52:36.605: INFO: Pod "webserver-deployment-795d758f88-79lhc" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-79lhc webserver-deployment-795d758f88- deployment-6918  8c2496f7-194e-45de-aec4-b3cab3732418 10334 0 2022-09-05 15:52:36 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0b530825-1f88-4c59-b08f-769bd58454f1 0xc0043d7987 0xc0043d7988}] []  [{kube-controller-manager Update v1 2022-09-05 15:52:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b530825-1f88-4c59-b08f-769bd58454f1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-brsv7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-brsv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl
{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  5 15:52:36.605: INFO: Pod "webserver-deployment-795d758f88-8v56c" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-8v56c webserver-deployment-795d758f88- deployment-6918  be07ef84-b7b6-4034-ab95-e833d34a651c 10304 0 2022-09-05 15:52:34 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0b530825-1f88-4c59-b08f-769bd58454f1 0xc0043d7af7 0xc0043d7af8}] []  [{kube-controller-manager Update v1 2022-09-05 15:52:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b530825-1f88-4c59-b08f-769bd58454f1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-05 15:52:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.27\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cszpk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cszpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEs
calation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-rbkcco-worker-0xh5an,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 15:52:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 15:52:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 15:52:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 15:52:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.27,StartTime:2022-09-05 15:52:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.27,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  5 15:52:36.605: INFO: Pod "webserver-deployment-795d758f88-9shrd" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-9shrd webserver-deployment-795d758f88- deployment-6918  fec4bdd0-4506-49a6-a61b-840d765db78a 10298 0 2022-09-05 15:52:34 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0b530825-1f88-4c59-b08f-769bd58454f1 0xc0043d7d00 0xc0043d7d01}] []  [{kube-controller-manager Update v1 2022-09-05 15:52:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b530825-1f88-4c59-b08f-769bd58454f1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-05 15:52:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.74\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fqtwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fqtwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEs
calation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-rbkcco-worker-c48v4q,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 15:52:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 15:52:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 15:52:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 15:52:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.74,StartTime:2022-09-05 15:52:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.74,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep  5 15:52:36.606: INFO: Pod "webserver-deployment-795d758f88-mvflm" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-mvflm webserver-deployment-795d758f88- deployment-6918  524962aa-9609-4dfc-a49b-7df8845cdb3b 10324 0 2022-09-05 15:52:36 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0b530825-1f88-4c59-b08f-769bd58454f1 0xc00450c0f0 0xc00450c0f1}] []  [{kube-controller-manager Update v1 2022-09-05 15:52:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b530825-1f88-4c59-b08f-769bd58454f1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mrvgk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mrvgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl
{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  5 15:52:36.606: INFO: Pod "webserver-deployment-795d758f88-pmjj6" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-pmjj6 webserver-deployment-795d758f88- deployment-6918  96da1f8c-89c6-4133-8016-14c06b47e3e4 10331 0 2022-09-05 15:52:36 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0b530825-1f88-4c59-b08f-769bd58454f1 0xc00450c5d7 0xc00450c5d8}] []  [{kube-controller-manager Update v1 2022-09-05 15:52:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b530825-1f88-4c59-b08f-769bd58454f1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-klcvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-klcvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl
{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  5 15:52:36.606: INFO: Pod "webserver-deployment-795d758f88-wvqlm" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-wvqlm webserver-deployment-795d758f88- deployment-6918  3b6f89b0-0d98-4226-ad12-63de9458354a 10301 0 2022-09-05 15:52:34 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0b530825-1f88-4c59-b08f-769bd58454f1 0xc00450c917 0xc00450c918}] []  [{kube-controller-manager Update v1 2022-09-05 15:52:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b530825-1f88-4c59-b08f-769bd58454f1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-05 15:52:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.32\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lzj58,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lzj58,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEs
calation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-vcz8p,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 15:52:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 15:52:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 15:52:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 15:52:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.32,StartTime:2022-09-05 15:52:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.32,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep  5 15:52:36.607: INFO: Pod "webserver-deployment-795d758f88-ww84w" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-ww84w webserver-deployment-795d758f88- deployment-6918  764139a8-5399-4c16-9637-ed8434edf23d 10307 0 2022-09-05 15:52:34 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0b530825-1f88-4c59-b08f-769bd58454f1 0xc00450d0b0 0xc00450d0b1}] []  [{kube-controller-manager Update v1 2022-09-05 15:52:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b530825-1f88-4c59-b08f-769bd58454f1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-05 15:52:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.33\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p48qh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p48qh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEs
calation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-grv26,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 15:52:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 15:52:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 15:52:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 15:52:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.1.33,StartTime:2022-09-05 15:52:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.33,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep  5 15:52:36.607: INFO: Pod "webserver-deployment-795d758f88-x5c88" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-x5c88 webserver-deployment-795d758f88- deployment-6918  18325468-286e-4bcb-93ae-fcb13d0030a7 10295 0 2022-09-05 15:52:34 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0b530825-1f88-4c59-b08f-769bd58454f1 0xc00450d370 0xc00450d371}] []  [{kube-controller-manager Update v1 2022-09-05 15:52:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b530825-1f88-4c59-b08f-769bd58454f1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-09-05 15:52:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.75\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gf4j6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gf4j6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEs
calation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-rbkcco-worker-c48v4q,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 15:52:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 15:52:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 15:52:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 15:52:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.75,StartTime:2022-09-05 15:52:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.75,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep  5 15:52:36.607: INFO: Pod "webserver-deployment-795d758f88-xxdh5" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-xxdh5 webserver-deployment-795d758f88- deployment-6918  589ad0a1-d36c-4fdd-931c-5c7bc4398837 10330 0 2022-09-05 15:52:36 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0b530825-1f88-4c59-b08f-769bd58454f1 0xc00450d8c0 0xc00450d8c1}] []  [{kube-controller-manager Update v1 2022-09-05 15:52:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b530825-1f88-4c59-b08f-769bd58454f1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rrhdp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rrhdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-vcz8p,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,Sup
plementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 15:52:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  5 15:52:36.608: INFO: Pod "webserver-deployment-847dcfb7fb-2567p" is not available:
    &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2567p webserver-deployment-847dcfb7fb- deployment-6918  63d1edc4-95f7-4317-aacc-2e38fa07f54f 10328 0 2022-09-05 15:52:36 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f0e7c680-3596-404d-b14c-c94831f04a8e 0xc00450dc90 0xc00450dc91}] []  [{kube-controller-manager Update v1 2022-09-05 15:52:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f0e7c680-3596-404d-b14c-c94831f04a8e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-925bk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-925bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,Run
AsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  5 15:52:36.608: INFO: Pod "webserver-deployment-847dcfb7fb-2xr5s" is not available:
    &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2xr5s webserver-deployment-847dcfb7fb- deployment-6918  8efb6885-dab4-4da7-a572-939add335f3d 10329 0 2022-09-05 15:52:36 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb f0e7c680-3596-404d-b14c-c94831f04a8e 0xc004f26017 0xc004f26018}] []  [{kube-controller-manager Update v1 2022-09-05 15:52:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f0e7c680-3596-404d-b14c-c94831f04a8e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kqkhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kqkhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-vcz8p,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsU
ser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 15:52:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:52:36.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-6918" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":51,"skipped":910,"failed":0}

    
    SSSSSSSSSSSSS
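    For reference, "proportional scaling" in the passed case above means that when a Deployment is scaled while a RollingUpdate rollout is still in progress, the controller spreads the extra replicas across the old and new ReplicaSets in proportion to their current sizes. A rough manual sketch of the same scenario, assuming a reachable cluster and a hypothetical "demo" namespace (the webserver:404 tag is deliberately unpullable, exactly as in the log above):

    kubectl create namespace demo
    kubectl -n demo create deployment webserver --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --replicas=10
    kubectl -n demo rollout status deployment/webserver --timeout=120s
    # start a rollout that can never finish (the image cannot be pulled) ...
    kubectl -n demo set image deployment/webserver httpd=webserver:404
    # ... then scale while that rollout is stuck
    kubectl -n demo scale deployment/webserver --replicas=30
    # both ReplicaSets should have grown roughly in proportion to their previous sizes
    kubectl -n demo get rs -l app=webserver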
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:52:37.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-87" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":52,"skipped":923,"failed":0}

    
    SSSSSSSSSS
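    For reference, the kubectl diff behaviour exercised above can be checked by hand roughly as follows (a sketch with a hypothetical Deployment "webserver" in a "demo" namespace; the exit code semantics are kubectl's own: 0 = no drift, 1 = differences found, >1 = error):

    kubectl -n demo get deployment webserver -o yaml > /tmp/webserver.yaml
    # edit /tmp/webserver.yaml locally, e.g. change .spec.replicas, then:
    kubectl diff -f /tmp/webserver.yaml
    echo "kubectl diff exit code: $?"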
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 62 lines ...
    STEP: Destroying namespace "services-691" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":53,"skipped":933,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
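    For reference, the Service shape exercised by the session-affinity case above looks roughly like this (a sketch with hypothetical names: ClientIP affinity on a ClusterIP Service plus a short timeout, so stickiness expires and can be re-checked):

    kubectl -n demo apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: affinity-clusterip-timeout
    spec:
      selector:
        app: webserver
      ports:
      - port: 80
        targetPort: 80
      sessionAffinity: ClientIP
      sessionAffinityConfig:
        clientIP:
          timeoutSeconds: 10
    EOF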
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 36 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:53:18.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-7423" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":54,"skipped":957,"failed":0}

    
    SSSSSSS
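    For reference, the garbage-collection behaviour covered above rests on ownerReferences: each ReplicaSet created by a Deployment points back at it, so deleting the Deployment without orphaning lets the garbage collector remove the dependent ReplicaSets. A sketch with hypothetical names:

    kubectl -n demo get rs -l app=webserver -o jsonpath='{.items[0].metadata.ownerReferences}'
    # cascading deletion removes the owned ReplicaSets and their Pods;
    # --cascade=orphan would leave them behind instead
    kubectl -n demo delete deployment webserver --cascade=background
    kubectl -n demo get rs -l app=webserver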
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
    STEP: creating replication controller affinity-clusterip in namespace services-2303
    I0905 15:51:28.704117      18 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-2303, replica count: 3
    I0905 15:51:31.755161      18 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
    Sep  5 15:51:31.768: INFO: Creating new exec pod
    Sep  5 15:51:36.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:51:39.007: INFO: rc: 1
    Sep  5 15:51:39.007: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
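    For reference, the probe being retried here is a plain TCP check from an exec pod against the Service name; it typically keeps failing until kube-proxy on the exec pod's node has programmed rules for the freshly created Service (or, during an upgrade run like this one, while node networking is still in flux). Run by hand, with hypothetical pod and namespace names, it is roughly:

    kubectl -n demo exec execpod -- /bin/sh -x -c 'echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    # exit code 0 once the ClusterIP answers on port 80; 1 while the connect still times out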
    Sep  5 15:51:40.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:51:42.192: INFO: rc: 1
    Sep  5 15:51:42.192: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:51:43.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:51:45.214: INFO: rc: 1
    Sep  5 15:51:45.214: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:51:46.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:51:48.233: INFO: rc: 1
    Sep  5 15:51:48.233: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:51:49.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:51:51.180: INFO: rc: 1
    Sep  5 15:51:51.180: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:51:52.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:51:54.189: INFO: rc: 1
    Sep  5 15:51:54.189: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:51:55.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:51:57.238: INFO: rc: 1
    Sep  5 15:51:57.238: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:51:58.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:52:00.203: INFO: rc: 1
    Sep  5 15:52:00.203: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:52:01.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:52:03.187: INFO: rc: 1
    Sep  5 15:52:03.187: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:52:04.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:52:06.186: INFO: rc: 1
    Sep  5 15:52:06.186: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:52:07.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:52:09.194: INFO: rc: 1
    Sep  5 15:52:09.195: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:52:10.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:52:12.231: INFO: rc: 1
    Sep  5 15:52:12.231: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:52:13.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:52:15.177: INFO: rc: 1
    Sep  5 15:52:15.177: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:52:16.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:52:18.201: INFO: rc: 1
    Sep  5 15:52:18.201: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:52:19.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:52:21.182: INFO: rc: 1
    Sep  5 15:52:21.182: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:52:22.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:52:24.207: INFO: rc: 1
    Sep  5 15:52:24.207: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:52:25.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:52:27.182: INFO: rc: 1
    Sep  5 15:52:27.182: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:52:28.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:52:30.158: INFO: rc: 1
    Sep  5 15:52:30.158: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:52:31.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:52:33.427: INFO: rc: 1
    Sep  5 15:52:33.427: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:52:34.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:52:36.203: INFO: rc: 1
    Sep  5 15:52:36.203: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:52:37.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:52:39.231: INFO: rc: 1
    Sep  5 15:52:39.231: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:52:40.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:52:42.195: INFO: rc: 1
    Sep  5 15:52:42.195: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:52:43.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:52:45.180: INFO: rc: 1
    Sep  5 15:52:45.180: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:52:46.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:52:48.264: INFO: rc: 1
    Sep  5 15:52:48.264: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:52:49.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:52:51.170: INFO: rc: 1
    Sep  5 15:52:51.170: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:52:52.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:52:54.164: INFO: rc: 1
    Sep  5 15:52:54.164: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:52:55.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:52:57.168: INFO: rc: 1
    Sep  5 15:52:57.168: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:52:58.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:53:00.189: INFO: rc: 1
    Sep  5 15:53:00.189: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:53:01.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:53:03.171: INFO: rc: 1
    Sep  5 15:53:03.171: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:53:04.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:53:06.186: INFO: rc: 1
    Sep  5 15:53:06.186: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:53:07.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:53:09.185: INFO: rc: 1
    Sep  5 15:53:09.185: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:53:10.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:53:12.201: INFO: rc: 1
    Sep  5 15:53:12.201: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:53:13.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:53:15.194: INFO: rc: 1
    Sep  5 15:53:15.194: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:53:16.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:53:18.177: INFO: rc: 1
    Sep  5 15:53:18.177: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:53:19.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:53:21.174: INFO: rc: 1
    Sep  5 15:53:21.174: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:53:22.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:53:24.197: INFO: rc: 1
    Sep  5 15:53:24.197: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:53:25.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:53:27.195: INFO: rc: 1
    Sep  5 15:53:27.195: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:53:28.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:53:30.178: INFO: rc: 1
    Sep  5 15:53:30.178: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:53:31.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:53:33.176: INFO: rc: 1
    Sep  5 15:53:33.176: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:53:34.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:53:36.198: INFO: rc: 1
    Sep  5 15:53:36.198: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:53:37.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:53:39.185: INFO: rc: 1
    Sep  5 15:53:39.185: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:53:39.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep  5 15:53:41.341: INFO: rc: 1
    Sep  5 15:53:41.341: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2303 exec execpod-affinityljq5g -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  5 15:53:41.342: FAIL: Unexpected error:

        <*errors.errorString | 0xc0045f4190>: {
            s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol",
        }
        service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol
    occurred
    
... skipping 27 lines ...
    • Failure [135.279 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
      should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 15:53:41.342: Unexpected error:

          <*errors.errorString | 0xc0045f4190>: {
              s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol",
          }
          service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3278
    ------------------------------
    {"msg":"FAILED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":30,"skipped":473,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
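
    The probe retried above is a plain TCP connect to the ClusterIP service name with a 2s per-attempt timeout (nc -w 2), repeated roughly every 3s until the 2m0s overall deadline. A minimal sketch of that polling loop in Go, standard library only and not the e2e framework's own helper (it would have to run inside the test namespace for the service name to resolve):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const addr = "affinity-clusterip:80" // service name and port from the test above
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // Mirrors `nc -v -t -w 2 affinity-clusterip 80`: a TCP dial with a 2s timeout.
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("service reachable")
                return
            }
            fmt.Println("retrying:", err)
            time.Sleep(3 * time.Second)
        }
        fmt.Println("service is not reachable within 2m0s timeout on endpoint", addr)
    }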

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:53:43.855: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 42 lines ...
    STEP: Destroying namespace "services-5462" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":31,"skipped":473,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:53:52.841: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-9b05d041-3a9b-4da5-8a8c-f1bfb7aa69ae
    STEP: Creating a pod to test consume configMaps
    Sep  5 15:53:52.908: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ad446eb5-e192-4cf9-99d3-6c7c22a3ac9f" in namespace "projected-592" to be "Succeeded or Failed"
    Sep  5 15:53:52.912: INFO: Pod "pod-projected-configmaps-ad446eb5-e192-4cf9-99d3-6c7c22a3ac9f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.630374ms
    Sep  5 15:53:54.918: INFO: Pod "pod-projected-configmaps-ad446eb5-e192-4cf9-99d3-6c7c22a3ac9f": Phase="Running", Reason="", readiness=false. Elapsed: 2.009514996s
    Sep  5 15:53:56.923: INFO: Pod "pod-projected-configmaps-ad446eb5-e192-4cf9-99d3-6c7c22a3ac9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014639953s
    STEP: Saw pod success
    Sep  5 15:53:56.924: INFO: Pod "pod-projected-configmaps-ad446eb5-e192-4cf9-99d3-6c7c22a3ac9f" satisfied condition "Succeeded or Failed"
    Sep  5 15:53:56.927: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-0xh5an pod pod-projected-configmaps-ad446eb5-e192-4cf9-99d3-6c7c22a3ac9f container projected-configmap-volume-test: <nil>
    STEP: delete the pod
    Sep  5 15:53:56.957: INFO: Waiting for pod pod-projected-configmaps-ad446eb5-e192-4cf9-99d3-6c7c22a3ac9f to disappear
    Sep  5 15:53:56.960: INFO: Pod pod-projected-configmaps-ad446eb5-e192-4cf9-99d3-6c7c22a3ac9f no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:53:56.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-592" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":487,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
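
    The "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" lines above come from polling the pod's status.phase about every 2s. A minimal client-go sketch of that wait, assuming the /tmp/kubeconfig path from the log; the namespace and pod name are the ones printed above:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns := "projected-592"
        name := "pod-projected-configmaps-ad446eb5-e192-4cf9-99d3-6c7c22a3ac9f"
        deadline := time.Now().Add(5 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil && (pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed) {
                fmt.Println("pod finished with phase", pod.Status.Phase)
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod", name)
    }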

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Projected combined
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-projected-all-test-volume-353df36b-4ced-4849-9870-caf9afa2816a
    STEP: Creating secret with name secret-projected-all-test-volume-c7b6188d-206e-4c7c-a250-819b36f90e08
    STEP: Creating a pod to test Check all projections for projected volume plugin
    Sep  5 15:53:57.034: INFO: Waiting up to 5m0s for pod "projected-volume-be404619-f961-4fa4-a71c-b92d2aadce59" in namespace "projected-1670" to be "Succeeded or Failed"
    Sep  5 15:53:57.038: INFO: Pod "projected-volume-be404619-f961-4fa4-a71c-b92d2aadce59": Phase="Pending", Reason="", readiness=false. Elapsed: 3.328344ms
    Sep  5 15:53:59.043: INFO: Pod "projected-volume-be404619-f961-4fa4-a71c-b92d2aadce59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008351666s
    Sep  5 15:54:01.047: INFO: Pod "projected-volume-be404619-f961-4fa4-a71c-b92d2aadce59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01292327s
    STEP: Saw pod success
    Sep  5 15:54:01.047: INFO: Pod "projected-volume-be404619-f961-4fa4-a71c-b92d2aadce59" satisfied condition "Succeeded or Failed"
    Sep  5 15:54:01.052: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod projected-volume-be404619-f961-4fa4-a71c-b92d2aadce59 container projected-all-volume-test: <nil>
    STEP: delete the pod
    Sep  5 15:54:01.079: INFO: Waiting for pod projected-volume-be404619-f961-4fa4-a71c-b92d2aadce59 to disappear
    Sep  5 15:54:01.083: INFO: Pod projected-volume-be404619-f961-4fa4-a71c-b92d2aadce59 no longer exists
    [AfterEach] [sig-storage] Projected combined
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:54:01.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-1670" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":488,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
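
    The "Check all projections" pod above mounts one projected volume that combines a ConfigMap, a Secret and the downward API. A minimal sketch of that volume shape using the k8s.io/api/core/v1 types; the ConfigMap and Secret names are the ones created above, while the volume name and downward API file path are illustrative:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "projected-all-volume", // illustrative volume name
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume-353df36b-4ced-4849-9870-caf9afa2816a"},
                        }},
                        {Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume-c7b6188d-206e-4c7c-a250-819b36f90e08"},
                        }},
                        {DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "podname", // illustrative file path
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                            }},
                        }},
                    },
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }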

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 49 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:54:49.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-3932" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":55,"skipped":964,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:54:56.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4369" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":56,"skipped":988,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:54:56.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-5125" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":57,"skipped":1004,"failed":0}
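
    The watch test above (detail skipped) starts a watch from an explicit resourceVersion rather than from "now", so no events between the list and the watch are missed. A minimal client-go sketch of that pattern, assuming the /tmp/kubeconfig path and the watch-5125 namespace from the log; watching ConfigMaps is an illustrative choice:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns := "watch-5125"
        // List once to learn the current resourceVersion of the collection.
        list, err := cs.CoreV1().ConfigMaps(ns).List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        // Start watching from exactly that version.
        w, err := cs.CoreV1().ConfigMaps(ns).Watch(context.TODO(), metav1.ListOptions{ResourceVersion: list.ResourceVersion})
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Println(ev.Type, ev.Object.GetObjectKind().GroupVersionKind())
        }
    }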

    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:54:56.211: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-map-fd475ae1-8802-48e0-ac0e-a27b4c56c827
    STEP: Creating a pod to test consume secrets
    Sep  5 15:54:56.261: INFO: Waiting up to 5m0s for pod "pod-secrets-ca465032-7908-4e42-b3e0-21392fa46e57" in namespace "secrets-4791" to be "Succeeded or Failed"
    Sep  5 15:54:56.269: INFO: Pod "pod-secrets-ca465032-7908-4e42-b3e0-21392fa46e57": Phase="Pending", Reason="", readiness=false. Elapsed: 7.644801ms
    Sep  5 15:54:58.273: INFO: Pod "pod-secrets-ca465032-7908-4e42-b3e0-21392fa46e57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01178542s
    Sep  5 15:55:00.278: INFO: Pod "pod-secrets-ca465032-7908-4e42-b3e0-21392fa46e57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016454657s
    STEP: Saw pod success
    Sep  5 15:55:00.278: INFO: Pod "pod-secrets-ca465032-7908-4e42-b3e0-21392fa46e57" satisfied condition "Succeeded or Failed"
    Sep  5 15:55:00.281: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-vcz8p pod pod-secrets-ca465032-7908-4e42-b3e0-21392fa46e57 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  5 15:55:00.307: INFO: Waiting for pod pod-secrets-ca465032-7908-4e42-b3e0-21392fa46e57 to disappear
    Sep  5 15:55:00.309: INFO: Pod pod-secrets-ca465032-7908-4e42-b3e0-21392fa46e57 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:55:00.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-4791" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":58,"skipped":1007,"failed":0}
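
    The "volume with mappings" variant above mounts the Secret but remaps individual keys to custom file paths via items. A minimal sketch using the k8s.io/api/core/v1 types; the Secret name is the one created above, the key and target path are illustrative:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "secret-volume", // illustrative volume name
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{
                    SecretName: "secret-test-map-fd475ae1-8802-48e0-ac0e-a27b4c56c827",
                    Items: []corev1.KeyToPath{
                        // illustrative mapping: key "data-1" ends up at <mountPath>/new-path-data-1
                        {Key: "data-1", Path: "new-path-data-1"},
                    },
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }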

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:55:06.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-1008" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":59,"skipped":1014,"failed":0}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] version v1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 39 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:55:08.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "proxy-1427" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":60,"skipped":1031,"failed":0}
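
    The proxy test above (detail skipped) issues GETs through the pod and service proxy subresources, i.e. URLs of the form /api/v1/namespaces/<ns>/pods/<pod>/proxy/<path>, the same shape that appears in the DNS probe failure further down. A minimal client-go sketch of such a request; the pod name and path suffix are illustrative placeholders:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // GET /api/v1/namespaces/proxy-1427/pods/example-pod/proxy/healthz
        body, err := cs.CoreV1().RESTClient().
            Get().
            Namespace("proxy-1427").
            Resource("pods").
            Name("example-pod"). // illustrative pod name
            SubResource("proxy").
            Suffix("healthz"). // illustrative path served by the pod
            DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Println(string(body))
    }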

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":3,"skipped":50,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}

    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:50:07.451: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 6 lines ...
    
    STEP: creating a pod to probe DNS
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  5 15:53:42.577: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-6912.svc.cluster.local from pod dns-6912/dns-test-f30ed6a7-519c-4ed8-ad79-fa37dfefd7fe: the server is currently unable to handle the request (get pods dns-test-f30ed6a7-519c-4ed8-ad79-fa37dfefd7fe)
    Sep  5 15:55:09.525: FAIL: Unable to read jessie_udp@dns-test-service-3.dns-6912.svc.cluster.local from pod dns-6912/dns-test-f30ed6a7-519c-4ed8-ad79-fa37dfefd7fe: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-6912/pods/dns-test-f30ed6a7-519c-4ed8-ad79-fa37dfefd7fe/proxy/results/jessie_udp@dns-test-service-3.dns-6912.svc.cluster.local": context deadline exceeded
    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc00011c010, 0x7f85c239b878, 0x18, 0xc001e70360)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc00011c010, 0xc00432c8c0, 0x2a14500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f
... skipping 15 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b
    testing.tRunner(0xc000c4ac00, 0x729a2d8)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0905 15:55:09.526276      17 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep  5 15:55:09.525: Unable to read jessie_udp@dns-test-service-3.dns-6912.svc.cluster.local from pod dns-6912/dns-test-f30ed6a7-519c-4ed8-ad79-fa37dfefd7fe: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-6912/pods/dns-test-f30ed6a7-519c-4ed8-ad79-fa37dfefd7fe/proxy/results/jessie_udp@dns-test-service-3.dns-6912.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:217, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc00011c010, 0x7f85c239b878, 0x18, 0xc001e70360)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc00011c010, 0xc00432c8c0, 0x2a14500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x78de4a8, 0xc00011c010, 0xc001e70301, 0xc001e70360, 0xc00432c8c0, 0x6826620, 0xc00432c8c0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:577 +0xe5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x78de4a8, 0xc00011c010, 0x12a05f200, 0x8bb2c97000, 0xc00432c8c0, 0x6d6e4e0, 0x2521201)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc0023a2e00, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc003a433c0, 0x2, 0x2, 0x702fe9b, 0x7, 0xc000588800, 0x7971668, 0xc003418840, 0x1, 0x70515b7, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x13c\nk8s.io/kubernetes/test/e2e/network.validateTargetedProbeOutput(0xc0012271e0, 0xc000588800, 0xc003a433c0, 0x2, 0x2, 0x70515b7, 0x10)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:548 +0x376\nk8s.io/kubernetes/test/e2e/network.glob..func2.9()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:354 +0x6ed\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc000c4ac00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc000c4ac00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b\ntesting.tRunner(0xc000c4ac00, 0x729a2d8)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.

    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6bbe4c0, 0xc004a42100)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6bbe4c0, 0xc004a42100)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc000b76180, 0x16b, 0x88abe86, 0x7d, 0xd9, 0xc0005e2a80, 0x9fe)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x62ef260, 0x77956f0)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc000b76180, 0x16b, 0xc003ffde88, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc000b76180, 0x16b, 0xc003ffdf70, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x70d3e4f, 0x24, 0xc003ffe1d0, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc00011c010, 0x7f85c239b878, 0x18, 0xc001e70360)
... skipping 101 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:55:09.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-5064" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":61,"skipped":1096,"failed":0}
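
    The garbage-collector test above (detail skipped) deletes a Deployment with deleteOptions.PropagationPolicy=Orphan, so the ReplicaSet it created is left behind instead of being cascaded away. A minimal client-go sketch of that delete, assuming the /tmp/kubeconfig path; the namespace is the one from the log and the Deployment name is illustrative:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        orphan := metav1.DeletePropagationOrphan
        err = cs.AppsV1().Deployments("gc-5064").Delete(context.TODO(),
            "example-deployment", // illustrative name
            metav1.DeleteOptions{PropagationPolicy: &orphan})
        if err != nil {
            panic(err)
        }
        fmt.Println("deployment deleted; its ReplicaSet is orphaned rather than garbage-collected")
    }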

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "crd-webhook-3414" for this suite.
    [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":62,"skipped":1107,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:55:20.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-1580" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":63,"skipped":1127,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 15:55:20.812: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1faafb86-e3eb-4c9b-9df4-cf023f27d9bb" in namespace "projected-3883" to be "Succeeded or Failed"
    Sep  5 15:55:20.815: INFO: Pod "downwardapi-volume-1faafb86-e3eb-4c9b-9df4-cf023f27d9bb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.227227ms
    Sep  5 15:55:22.821: INFO: Pod "downwardapi-volume-1faafb86-e3eb-4c9b-9df4-cf023f27d9bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009356529s
    Sep  5 15:55:24.827: INFO: Pod "downwardapi-volume-1faafb86-e3eb-4c9b-9df4-cf023f27d9bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015049169s
    STEP: Saw pod success
    Sep  5 15:55:24.827: INFO: Pod "downwardapi-volume-1faafb86-e3eb-4c9b-9df4-cf023f27d9bb" satisfied condition "Succeeded or Failed"
    Sep  5 15:55:24.831: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-vcz8p pod downwardapi-volume-1faafb86-e3eb-4c9b-9df4-cf023f27d9bb container client-container: <nil>
    STEP: delete the pod
    Sep  5 15:55:24.852: INFO: Waiting for pod downwardapi-volume-1faafb86-e3eb-4c9b-9df4-cf023f27d9bb to disappear
    Sep  5 15:55:24.856: INFO: Pod downwardapi-volume-1faafb86-e3eb-4c9b-9df4-cf023f27d9bb no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:55:24.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3883" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":64,"skipped":1128,"failed":0}
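
    The downward API pod above verifies that an explicit mode can be set on an individual item file. A minimal sketch of such a volume using the k8s.io/api/core/v1 types; the file path and the 0400 mode are illustrative values:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        mode := int32(0o400) // illustrative per-file mode
        vol := corev1.Volume{
            Name: "podinfo", // illustrative volume name
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path:     "podname", // illustrative file path
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                        Mode:     &mode,
                    }},
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }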

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
    • [SLOW TEST:300.081 seconds]
    [sig-apps] CronJob
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      should not schedule jobs when suspended [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":29,"skipped":635,"failed":0}
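
    The [Slow] CronJob test above (detail skipped) creates a CronJob with spec.suspend=true and then asserts over its ~5-minute window that no Jobs are scheduled. A minimal sketch of such a suspended CronJob using the k8s.io/api Go types; the name, schedule and container are illustrative:

    package main

    import (
        "fmt"

        batchv1 "k8s.io/api/batch/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        suspend := true
        cj := batchv1.CronJob{
            ObjectMeta: metav1.ObjectMeta{Name: "suspended-cronjob"}, // illustrative name
            Spec: batchv1.CronJobSpec{
                Schedule: "*/1 * * * *", // illustrative schedule
                Suspend:  &suspend,      // no Jobs are created while this is true
                JobTemplate: batchv1.JobTemplateSpec{
                    Spec: batchv1.JobSpec{
                        Template: corev1.PodTemplateSpec{
                            Spec: corev1.PodSpec{
                                RestartPolicy: corev1.RestartPolicyOnFailure,
                                Containers: []corev1.Container{{
                                    Name:    "task",
                                    Image:   "busybox", // illustrative image
                                    Command: []string{"sh", "-c", "date"},
                                }},
                            },
                        },
                    },
                },
            },
        }
        fmt.Printf("%+v\n", cj.Spec)
    }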

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:56:45.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-757" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":787,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
    STEP: Destroying namespace "crd-webhook-9063" for this suite.
    [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":31,"skipped":821,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:56:52.347: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-a745bb53-320e-4253-aa92-c88eba42db20
    STEP: Creating a pod to test consume secrets
    Sep  5 15:56:52.423: INFO: Waiting up to 5m0s for pod "pod-secrets-99b52da9-7394-4126-b9e1-c64092d934e8" in namespace "secrets-5014" to be "Succeeded or Failed"
    Sep  5 15:56:52.428: INFO: Pod "pod-secrets-99b52da9-7394-4126-b9e1-c64092d934e8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.697585ms
    Sep  5 15:56:54.434: INFO: Pod "pod-secrets-99b52da9-7394-4126-b9e1-c64092d934e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011292753s
    Sep  5 15:56:56.439: INFO: Pod "pod-secrets-99b52da9-7394-4126-b9e1-c64092d934e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016140748s
    STEP: Saw pod success
    Sep  5 15:56:56.439: INFO: Pod "pod-secrets-99b52da9-7394-4126-b9e1-c64092d934e8" satisfied condition "Succeeded or Failed"
    Sep  5 15:56:56.445: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-grv26 pod pod-secrets-99b52da9-7394-4126-b9e1-c64092d934e8 container secret-env-test: <nil>
    STEP: delete the pod
    Sep  5 15:56:56.483: INFO: Waiting for pod pod-secrets-99b52da9-7394-4126-b9e1-c64092d934e8 to disappear
    Sep  5 15:56:56.486: INFO: Pod pod-secrets-99b52da9-7394-4126-b9e1-c64092d934e8 no longer exists
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:56:56.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-5014" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":842,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
    STEP: updating the pod
    Sep  5 15:56:59.107: INFO: Successfully updated pod "pod-update-activedeadlineseconds-bd3c59eb-7c9e-4842-98fd-80eb96beff9b"
    Sep  5 15:56:59.107: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-bd3c59eb-7c9e-4842-98fd-80eb96beff9b" in namespace "pods-2314" to be "terminated due to deadline exceeded"
    Sep  5 15:56:59.111: INFO: Pod "pod-update-activedeadlineseconds-bd3c59eb-7c9e-4842-98fd-80eb96beff9b": Phase="Running", Reason="", readiness=true. Elapsed: 4.064558ms
    Sep  5 15:57:01.117: INFO: Pod "pod-update-activedeadlineseconds-bd3c59eb-7c9e-4842-98fd-80eb96beff9b": Phase="Running", Reason="", readiness=true. Elapsed: 2.010302545s
    Sep  5 15:57:03.124: INFO: Pod "pod-update-activedeadlineseconds-bd3c59eb-7c9e-4842-98fd-80eb96beff9b": Phase="Running", Reason="", readiness=false. Elapsed: 4.017713295s
    Sep  5 15:57:05.130: INFO: Pod "pod-update-activedeadlineseconds-bd3c59eb-7c9e-4842-98fd-80eb96beff9b": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 6.022974493s
    Sep  5 15:57:05.130: INFO: Pod "pod-update-activedeadlineseconds-bd3c59eb-7c9e-4842-98fd-80eb96beff9b" satisfied condition "terminated due to deadline exceeded"
    [AfterEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:57:05.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-2314" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":847,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:57:07.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-335" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":34,"skipped":852,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:57:15.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-522" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":35,"skipped":874,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide podname only [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 15:57:15.215: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2ff1ef38-8ed0-46a0-ae8a-d84e6537bc43" in namespace "downward-api-1496" to be "Succeeded or Failed"

    Sep  5 15:57:15.219: INFO: Pod "downwardapi-volume-2ff1ef38-8ed0-46a0-ae8a-d84e6537bc43": Phase="Pending", Reason="", readiness=false. Elapsed: 3.857348ms
    Sep  5 15:57:17.224: INFO: Pod "downwardapi-volume-2ff1ef38-8ed0-46a0-ae8a-d84e6537bc43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009056668s
    Sep  5 15:57:19.230: INFO: Pod "downwardapi-volume-2ff1ef38-8ed0-46a0-ae8a-d84e6537bc43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015056924s
    STEP: Saw pod success
    Sep  5 15:57:19.230: INFO: Pod "downwardapi-volume-2ff1ef38-8ed0-46a0-ae8a-d84e6537bc43" satisfied condition "Succeeded or Failed"

    Sep  5 15:57:19.234: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-0xh5an pod downwardapi-volume-2ff1ef38-8ed0-46a0-ae8a-d84e6537bc43 container client-container: <nil>
    STEP: delete the pod
    Sep  5 15:57:19.255: INFO: Waiting for pod downwardapi-volume-2ff1ef38-8ed0-46a0-ae8a-d84e6537bc43 to disappear
    Sep  5 15:57:19.259: INFO: Pod downwardapi-volume-2ff1ef38-8ed0-46a0-ae8a-d84e6537bc43 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:57:19.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-1496" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":878,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
    STEP: Create set of pods
    Sep  5 15:57:19.342: INFO: created test-pod-1
    Sep  5 15:57:19.350: INFO: created test-pod-2
    Sep  5 15:57:19.360: INFO: created test-pod-3
    STEP: waiting for all 3 pods to be running
    Sep  5 15:57:19.361: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-5609' to be running and ready
    Sep  5 15:57:19.391: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  5 15:57:19.391: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  5 15:57:19.391: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  5 15:57:19.391: INFO: 0 / 3 pods in namespace 'pods-5609' are running and ready (0 seconds elapsed)
    Sep  5 15:57:19.391: INFO: expected 0 pod replicas in namespace 'pods-5609', 0 are Running and Ready.
    Sep  5 15:57:19.391: INFO: POD         NODE                                                            PHASE    GRACE  CONDITIONS
    Sep  5 15:57:19.391: INFO: test-pod-1  k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-grv26  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:57:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:57:19 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:57:19 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:57:19 +0000 UTC  }]
    Sep  5 15:57:19.392: INFO: test-pod-2  k8s-upgrade-and-conformance-rbkcco-worker-0xh5an                Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:57:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:57:19 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:57:19 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:57:19 +0000 UTC  }]
    Sep  5 15:57:19.392: INFO: test-pod-3  k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-grv26  Pending         [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-05 15:57:19 +0000 UTC  }]
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:57:23.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-5609" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":37,"skipped":892,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 15:57:23.492: INFO: Waiting up to 5m0s for pod "downwardapi-volume-42bfd0c3-5662-44f9-a0a5-29b23718f673" in namespace "downward-api-6920" to be "Succeeded or Failed"
    Sep  5 15:57:23.495: INFO: Pod "downwardapi-volume-42bfd0c3-5662-44f9-a0a5-29b23718f673": Phase="Pending", Reason="", readiness=false. Elapsed: 3.659367ms
    Sep  5 15:57:25.504: INFO: Pod "downwardapi-volume-42bfd0c3-5662-44f9-a0a5-29b23718f673": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012396005s
    Sep  5 15:57:27.510: INFO: Pod "downwardapi-volume-42bfd0c3-5662-44f9-a0a5-29b23718f673": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017757603s
    STEP: Saw pod success
    Sep  5 15:57:27.510: INFO: Pod "downwardapi-volume-42bfd0c3-5662-44f9-a0a5-29b23718f673" satisfied condition "Succeeded or Failed"
    Sep  5 15:57:27.514: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-grv26 pod downwardapi-volume-42bfd0c3-5662-44f9-a0a5-29b23718f673 container client-container: <nil>
    STEP: delete the pod
    Sep  5 15:57:27.532: INFO: Waiting for pod downwardapi-volume-42bfd0c3-5662-44f9-a0a5-29b23718f673 to disappear
    Sep  5 15:57:27.536: INFO: Pod downwardapi-volume-42bfd0c3-5662-44f9-a0a5-29b23718f673 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:57:27.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-6920" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":893,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:242.703 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":504,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:58:06.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-3215" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":529,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 101 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:58:39.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-5030" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":39,"skipped":894,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:58:06.487: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename svcaccounts
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  5 15:58:06.549: INFO: created pod
    Sep  5 15:58:06.549: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-1172" to be "Succeeded or Failed"
    Sep  5 15:58:06.552: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.434544ms
    Sep  5 15:58:08.559: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010238147s
    Sep  5 15:58:10.565: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016555646s
    STEP: Saw pod success
    Sep  5 15:58:10.565: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
    Sep  5 15:58:40.568: INFO: polling logs
    Sep  5 15:58:40.577: INFO: Pod logs: 
    I0905 15:58:07.242164       1 log.go:195] OK: Got token
    I0905 15:58:07.242339       1 log.go:195] validating with in-cluster discovery
    I0905 15:58:07.242840       1 log.go:195] OK: got issuer https://kubernetes.default.svc.cluster.local
    I0905 15:58:07.242890       1 log.go:195] Full, not-validated claims: 
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:58:40.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-1172" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":36,"skipped":532,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    Sep  5 15:58:42.926: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
    Sep  5 15:58:42.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4125 describe pod agnhost-primary-d2k6g'
    Sep  5 15:58:43.029: INFO: stderr: ""
    Sep  5 15:58:43.029: INFO: stdout: "Name:         agnhost-primary-d2k6g\nNamespace:    kubectl-4125\nPriority:     0\nNode:         k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-grv26/172.18.0.7\nStart Time:   Mon, 05 Sep 2022 15:58:41 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           192.168.1.50\nIPs:\n  IP:           192.168.1.50\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://d8b915531ab84070af1456f79e8de21dba2c116082751d30133bc7fa87438c9c\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.39\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 05 Sep 2022 15:58:42 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zb6hr (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-zb6hr:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  1s    default-scheduler  Successfully assigned kubectl-4125/agnhost-primary-d2k6g to k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-grv26\n  Normal  Pulled     1s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n  Normal  Created    1s    kubelet            Created container agnhost-primary\n  Normal  Started    1s    kubelet            Started container agnhost-primary\n"
    Sep  5 15:58:43.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4125 describe rc agnhost-primary'
    Sep  5 15:58:43.130: INFO: stderr: ""
    Sep  5 15:58:43.130: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-4125\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.39\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  2s    replication-controller  Created pod: agnhost-primary-d2k6g\n"
    Sep  5 15:58:43.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4125 describe service agnhost-primary'
    Sep  5 15:58:43.228: INFO: stderr: ""
    Sep  5 15:58:43.228: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-4125\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                10.138.51.141\nIPs:               10.138.51.141\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         192.168.1.50:6379\nSession Affinity:  None\nEvents:            <none>\n"
    Sep  5 15:58:43.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4125 describe node k8s-upgrade-and-conformance-rbkcco-6qgfw-5qghq'
    Sep  5 15:58:43.369: INFO: stderr: ""
    Sep  5 15:58:43.369: INFO: stdout: "Name:               k8s-upgrade-and-conformance-rbkcco-6qgfw-5qghq\nRoles:              control-plane,master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=k8s-upgrade-and-conformance-rbkcco-6qgfw-5qghq\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/control-plane=\n                    node-role.kubernetes.io/master=\n                    node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations:        cluster.x-k8s.io/cluster-name: k8s-upgrade-and-conformance-rbkcco\n                    cluster.x-k8s.io/cluster-namespace: k8s-upgrade-and-conformance-x40jpj\n                    cluster.x-k8s.io/machine: k8s-upgrade-and-conformance-rbkcco-6qgfw-5qghq\n                    cluster.x-k8s.io/owner-kind: KubeadmControlPlane\n                    cluster.x-k8s.io/owner-name: k8s-upgrade-and-conformance-rbkcco-6qgfw\n                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Mon, 05 Sep 2022 15:38:29 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  k8s-upgrade-and-conformance-rbkcco-6qgfw-5qghq\n  AcquireTime:     <unset>\n  RenewTime:       Mon, 05 Sep 2022 15:58:41 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Mon, 05 Sep 2022 15:54:24 +0000   Mon, 05 Sep 2022 15:38:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Mon, 05 Sep 2022 15:54:24 +0000   Mon, 05 Sep 2022 15:38:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Mon, 05 Sep 2022 15:54:24 +0000   Mon, 05 Sep 2022 15:38:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Mon, 05 Sep 2022 15:54:24 +0000   Mon, 05 Sep 2022 15:39:20 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.9\n  Hostname:    k8s-upgrade-and-conformance-rbkcco-6qgfw-5qghq\nCapacity:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             65860680Ki\n  pods:               110\nAllocatable:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             65860680Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 78ff6e1811ab4c3aa8e44ac4ef9a9eca\n  System UUID:                f8bfd6be-ea18-4ada-9e21-8c8cbda948b3\n  Boot ID:                    4f3eba65-535c-493c-80c4-8fd5575d7810\n  Kernel Version:             5.4.0-1072-gke\n  OS Image:                   Ubuntu 22.04.1 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.6.7\n  Kubelet Version:            v1.22.13\n  Kube-Proxy Version:         v1.22.13\nPodCIDR:                      192.168.5.0/24\nPodCIDRs:       
              192.168.5.0/24\nProviderID:                   docker:////k8s-upgrade-and-conformance-rbkcco-6qgfw-5qghq\nNon-terminated Pods:          (6 in total)\n  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---\n  kube-system                 etcd-k8s-upgrade-and-conformance-rbkcco-6qgfw-5qghq                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         20m\n  kube-system                 kindnet-qgv98                                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      20m\n  kube-system                 kube-apiserver-k8s-upgrade-and-conformance-rbkcco-6qgfw-5qghq             250m (3%)     0 (0%)      0 (0%)           0 (0%)         20m\n  kube-system                 kube-controller-manager-k8s-upgrade-and-conformance-rbkcco-6qgfw-5qghq    200m (2%)     0 (0%)      0 (0%)           0 (0%)         18m\n  kube-system                 kube-proxy-nbrv6                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m\n  kube-system                 kube-scheduler-k8s-upgrade-and-conformance-rbkcco-6qgfw-5qghq             100m (1%)     0 (0%)      0 (0%)           0 (0%)         20m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                750m (9%)   100m (1%)\n  memory             150Mi (0%)  50Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:\n  Type    Reason    Age   From        Message\n  ----    ------    ----  ----        -------\n  Normal  Starting  17m   kube-proxy  \n  Normal  Starting  20m   kube-proxy  Starting kube-proxy.\n"
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:58:43.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-4125" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":37,"skipped":554,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSliceMirroring
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:58:45.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslicemirroring-8528" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":40,"skipped":925,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:58:43.569: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a volume subpath [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in volume subpath
    Sep  5 15:58:43.628: INFO: Waiting up to 5m0s for pod "var-expansion-0da97f00-33b5-4524-ac8d-f007f4bb897c" in namespace "var-expansion-7198" to be "Succeeded or Failed"
    Sep  5 15:58:43.634: INFO: Pod "var-expansion-0da97f00-33b5-4524-ac8d-f007f4bb897c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.622251ms
    Sep  5 15:58:45.639: INFO: Pod "var-expansion-0da97f00-33b5-4524-ac8d-f007f4bb897c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009788554s
    Sep  5 15:58:47.644: INFO: Pod "var-expansion-0da97f00-33b5-4524-ac8d-f007f4bb897c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015425639s
    STEP: Saw pod success
    Sep  5 15:58:47.644: INFO: Pod "var-expansion-0da97f00-33b5-4524-ac8d-f007f4bb897c" satisfied condition "Succeeded or Failed"
    Sep  5 15:58:47.648: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod var-expansion-0da97f00-33b5-4524-ac8d-f007f4bb897c container dapi-container: <nil>
    STEP: delete the pod
    Sep  5 15:58:47.675: INFO: Waiting for pod var-expansion-0da97f00-33b5-4524-ac8d-f007f4bb897c to disappear
    Sep  5 15:58:47.682: INFO: Pod var-expansion-0da97f00-33b5-4524-ac8d-f007f4bb897c no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:58:47.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-7198" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":38,"skipped":600,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:59:15.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-7436" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":41,"skipped":953,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
    Sep  5 15:58:49.800: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:58:49.805: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:58:49.843: INFO: Unable to read jessie_udp@dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:58:49.849: INFO: Unable to read jessie_tcp@dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:58:49.854: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:58:49.858: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:58:49.884: INFO: Lookups using dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179 failed for: [wheezy_udp@dns-test-service.dns-7070.svc.cluster.local wheezy_tcp@dns-test-service.dns-7070.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7070.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7070.svc.cluster.local jessie_udp@dns-test-service.dns-7070.svc.cluster.local jessie_tcp@dns-test-service.dns-7070.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7070.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7070.svc.cluster.local]
    
    Sep  5 15:58:54.891: INFO: Unable to read wheezy_udp@dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:58:54.897: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:58:54.951: INFO: Unable to read jessie_udp@dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:58:54.956: INFO: Unable to read jessie_tcp@dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:58:55.004: INFO: Lookups using dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179 failed for: [wheezy_udp@dns-test-service.dns-7070.svc.cluster.local wheezy_tcp@dns-test-service.dns-7070.svc.cluster.local jessie_udp@dns-test-service.dns-7070.svc.cluster.local jessie_tcp@dns-test-service.dns-7070.svc.cluster.local]
    
    Sep  5 15:58:59.890: INFO: Unable to read wheezy_udp@dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:58:59.894: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:58:59.940: INFO: Unable to read jessie_udp@dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:58:59.945: INFO: Unable to read jessie_tcp@dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:58:59.991: INFO: Lookups using dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179 failed for: [wheezy_udp@dns-test-service.dns-7070.svc.cluster.local wheezy_tcp@dns-test-service.dns-7070.svc.cluster.local jessie_udp@dns-test-service.dns-7070.svc.cluster.local jessie_tcp@dns-test-service.dns-7070.svc.cluster.local]
    
    Sep  5 15:59:04.892: INFO: Unable to read wheezy_udp@dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:59:04.897: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:59:04.938: INFO: Unable to read jessie_udp@dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:59:04.943: INFO: Unable to read jessie_tcp@dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:59:04.986: INFO: Lookups using dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179 failed for: [wheezy_udp@dns-test-service.dns-7070.svc.cluster.local wheezy_tcp@dns-test-service.dns-7070.svc.cluster.local jessie_udp@dns-test-service.dns-7070.svc.cluster.local jessie_tcp@dns-test-service.dns-7070.svc.cluster.local]
    
    Sep  5 15:59:09.889: INFO: Unable to read wheezy_udp@dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:59:09.895: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:59:09.932: INFO: Unable to read jessie_udp@dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:59:09.937: INFO: Unable to read jessie_tcp@dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:59:09.970: INFO: Lookups using dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179 failed for: [wheezy_udp@dns-test-service.dns-7070.svc.cluster.local wheezy_tcp@dns-test-service.dns-7070.svc.cluster.local jessie_udp@dns-test-service.dns-7070.svc.cluster.local jessie_tcp@dns-test-service.dns-7070.svc.cluster.local]
    
    Sep  5 15:59:14.889: INFO: Unable to read wheezy_udp@dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:59:14.893: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:59:14.928: INFO: Unable to read jessie_udp@dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:59:14.932: INFO: Unable to read jessie_tcp@dns-test-service.dns-7070.svc.cluster.local from pod dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179: the server could not find the requested resource (get pods dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179)
    Sep  5 15:59:14.969: INFO: Lookups using dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179 failed for: [wheezy_udp@dns-test-service.dns-7070.svc.cluster.local wheezy_tcp@dns-test-service.dns-7070.svc.cluster.local jessie_udp@dns-test-service.dns-7070.svc.cluster.local jessie_tcp@dns-test-service.dns-7070.svc.cluster.local]
    
    Sep  5 15:59:19.986: INFO: DNS probes using dns-7070/dns-test-d9b8b451-f5dc-4d5e-a325-45c83626e179 succeeded
    
    STEP: deleting the pod
    STEP: deleting the test service
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:59:20.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-7070" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":39,"skipped":602,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:59:26.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-4313" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":-1,"completed":40,"skipped":619,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:59:31.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-501" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":-1,"completed":41,"skipped":648,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-6573-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":42,"skipped":651,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 15:59:40.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-9818" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":661,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSS
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":3,"skipped":50,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}

    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 15:55:09.566: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 6 lines ...
    
    STEP: creating a pod to probe DNS
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  5 15:58:45.681: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-1720.svc.cluster.local from pod dns-1720/dns-test-48b1eef6-1711-406d-84dd-c1c847ab259d: the server is currently unable to handle the request (get pods dns-test-48b1eef6-1711-406d-84dd-c1c847ab259d)
    Sep  5 16:00:11.640: FAIL: Unable to read jessie_udp@dns-test-service-3.dns-1720.svc.cluster.local from pod dns-1720/dns-test-48b1eef6-1711-406d-84dd-c1c847ab259d: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-1720/pods/dns-test-48b1eef6-1711-406d-84dd-c1c847ab259d/proxy/results/jessie_udp@dns-test-service-3.dns-1720.svc.cluster.local": context deadline exceeded
    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc00011c010, 0x7f85c239af18, 0x18, 0xc0001a3008)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc00011c010, 0xc00046afd0, 0x2a14500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f
... skipping 15 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b
    testing.tRunner(0xc000c4ac00, 0x729a2d8)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0905 16:00:11.641750      17 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep  5 16:00:11.640: Unable to read jessie_udp@dns-test-service-3.dns-1720.svc.cluster.local from pod dns-1720/dns-test-48b1eef6-1711-406d-84dd-c1c847ab259d: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-1720/pods/dns-test-48b1eef6-1711-406d-84dd-c1c847ab259d/proxy/results/jessie_udp@dns-test-service-3.dns-1720.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:217, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc00011c010, 0x7f85c239af18, 0x18, 0xc0001a3008)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc00011c010, 0xc00046afd0, 0x2a14500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x78de4a8, 0xc00011c010, 0xc0001a3001, 0xc0001a3008, 0xc00046afd0, 0x6826620, 0xc00046afd0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:577 +0xe5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x78de4a8, 0xc00011c010, 0x12a05f200, 0x8bb2c97000, 0xc00046afd0, 0x6d6e4e0, 0x2521201)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc0010e89a0, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc003b17660, 0x2, 0x2, 0x702fe9b, 0x7, 0xc0004e1000, 0x7971668, 0xc0023f1600, 0x1, 0x70515b7, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x13c\nk8s.io/kubernetes/test/e2e/network.validateTargetedProbeOutput(0xc0012271e0, 0xc0004e1000, 0xc003b17660, 0x2, 0x2, 0x70515b7, 0x10)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:548 +0x376\nk8s.io/kubernetes/test/e2e/network.glob..func2.9()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:354 +0x6ed\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc000c4ac00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc000c4ac00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b\ntesting.tRunner(0xc000c4ac00, 0x729a2d8)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.

    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6bbe4c0, 0xc003b18180)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6bbe4c0, 0xc003b18180)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc0003b2300, 0x16b, 0x88abe86, 0x7d, 0xd9, 0xc000d40a80, 0x9fe)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x62ef260, 0x77956f0)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc0003b2300, 0x16b, 0xc003ffde88, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc0003b2300, 0x16b, 0xc003ffdf70, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x70d3e4f, 0x24, 0xc003ffe1d0, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc00011c010, 0x7f85c239af18, 0x18, 0xc0001a3008)
... skipping 57 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 16:00:11.640: Unable to read jessie_udp@dns-test-service-3.dns-1720.svc.cluster.local from pod dns-1720/dns-test-48b1eef6-1711-406d-84dd-c1c847ab259d: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-1720/pods/dns-test-48b1eef6-1711-406d-84dd-c1c847ab259d/proxy/results/jessie_udp@dns-test-service-3.dns-1720.svc.cluster.local": context deadline exceeded
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":3,"skipped":50,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 16:00:11.748: INFO: Waiting up to 5m0s for pod "downwardapi-volume-85eb000f-9524-4464-8300-d71fb3495911" in namespace "downward-api-4436" to be "Succeeded or Failed"
    Sep  5 16:00:11.754: INFO: Pod "downwardapi-volume-85eb000f-9524-4464-8300-d71fb3495911": Phase="Pending", Reason="", readiness=false. Elapsed: 4.698554ms
    Sep  5 16:00:13.758: INFO: Pod "downwardapi-volume-85eb000f-9524-4464-8300-d71fb3495911": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009402913s
    Sep  5 16:00:15.767: INFO: Pod "downwardapi-volume-85eb000f-9524-4464-8300-d71fb3495911": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018539315s
    STEP: Saw pod success
    Sep  5 16:00:15.767: INFO: Pod "downwardapi-volume-85eb000f-9524-4464-8300-d71fb3495911" satisfied condition "Succeeded or Failed"
    Sep  5 16:00:15.774: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod downwardapi-volume-85eb000f-9524-4464-8300-d71fb3495911 container client-container: <nil>
    STEP: delete the pod
    Sep  5 16:00:15.799: INFO: Waiting for pod downwardapi-volume-85eb000f-9524-4464-8300-d71fb3495911 to disappear
    Sep  5 16:00:15.802: INFO: Pod downwardapi-volume-85eb000f-9524-4464-8300-d71fb3495911 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:00:15.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-4436" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":54,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:00:15.878: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via the environment [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap configmap-5187/configmap-test-886634a3-df9c-444b-9f52-8d11f4b085d1
    STEP: Creating a pod to test consume configMaps
    Sep  5 16:00:15.939: INFO: Waiting up to 5m0s for pod "pod-configmaps-5a719e58-22de-44ec-bc90-06c12feca53d" in namespace "configmap-5187" to be "Succeeded or Failed"
    Sep  5 16:00:15.942: INFO: Pod "pod-configmaps-5a719e58-22de-44ec-bc90-06c12feca53d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.465263ms
    Sep  5 16:00:17.947: INFO: Pod "pod-configmaps-5a719e58-22de-44ec-bc90-06c12feca53d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008732218s
    Sep  5 16:00:19.953: INFO: Pod "pod-configmaps-5a719e58-22de-44ec-bc90-06c12feca53d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014712635s
    STEP: Saw pod success
    Sep  5 16:00:19.954: INFO: Pod "pod-configmaps-5a719e58-22de-44ec-bc90-06c12feca53d" satisfied condition "Succeeded or Failed"
    Sep  5 16:00:19.957: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-configmaps-5a719e58-22de-44ec-bc90-06c12feca53d container env-test: <nil>
    STEP: delete the pod
    Sep  5 16:00:19.976: INFO: Waiting for pod pod-configmaps-5a719e58-22de-44ec-bc90-06c12feca53d to disappear
    Sep  5 16:00:19.979: INFO: Pod pod-configmaps-5a719e58-22de-44ec-bc90-06c12feca53d no longer exists
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:00:19.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-5187" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":98,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}
    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's cpu request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 16:00:20.057: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d03ee1de-0688-4c0d-9e0f-50c03c3791d3" in namespace "projected-5853" to be "Succeeded or Failed"
    Sep  5 16:00:20.062: INFO: Pod "downwardapi-volume-d03ee1de-0688-4c0d-9e0f-50c03c3791d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.785424ms
    Sep  5 16:00:22.066: INFO: Pod "downwardapi-volume-d03ee1de-0688-4c0d-9e0f-50c03c3791d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009201213s
    Sep  5 16:00:24.075: INFO: Pod "downwardapi-volume-d03ee1de-0688-4c0d-9e0f-50c03c3791d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017777726s
    STEP: Saw pod success
    Sep  5 16:00:24.075: INFO: Pod "downwardapi-volume-d03ee1de-0688-4c0d-9e0f-50c03c3791d3" satisfied condition "Succeeded or Failed"
    Sep  5 16:00:24.081: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-0xh5an pod downwardapi-volume-d03ee1de-0688-4c0d-9e0f-50c03c3791d3 container client-container: <nil>
    STEP: delete the pod
    Sep  5 16:00:24.112: INFO: Waiting for pod downwardapi-volume-d03ee1de-0688-4c0d-9e0f-50c03c3791d3 to disappear
    Sep  5 16:00:24.117: INFO: Pod downwardapi-volume-d03ee1de-0688-4c0d-9e0f-50c03c3791d3 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:00:24.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5853" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":105,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:00:24.138: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep  5 16:00:24.200: INFO: Waiting up to 5m0s for pod "downward-api-987f2cd9-f41c-45cd-b10a-6c7fbc46821a" in namespace "downward-api-5767" to be "Succeeded or Failed"
    Sep  5 16:00:24.207: INFO: Pod "downward-api-987f2cd9-f41c-45cd-b10a-6c7fbc46821a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.735767ms
    Sep  5 16:00:26.212: INFO: Pod "downward-api-987f2cd9-f41c-45cd-b10a-6c7fbc46821a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011655836s
    Sep  5 16:00:28.216: INFO: Pod "downward-api-987f2cd9-f41c-45cd-b10a-6c7fbc46821a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015927739s
    STEP: Saw pod success
    Sep  5 16:00:28.217: INFO: Pod "downward-api-987f2cd9-f41c-45cd-b10a-6c7fbc46821a" satisfied condition "Succeeded or Failed"
    Sep  5 16:00:28.220: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-0xh5an pod downward-api-987f2cd9-f41c-45cd-b10a-6c7fbc46821a container dapi-container: <nil>
    STEP: delete the pod
    Sep  5 16:00:28.239: INFO: Waiting for pod downward-api-987f2cd9-f41c-45cd-b10a-6c7fbc46821a to disappear
    Sep  5 16:00:28.243: INFO: Pod downward-api-987f2cd9-f41c-45cd-b10a-6c7fbc46821a no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:00:28.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-5767" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":110,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}
    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:00:28.281: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on tmpfs
    Sep  5 16:00:28.321: INFO: Waiting up to 5m0s for pod "pod-0f7ae8fd-ffca-4f99-b5d3-beb0f5cb00d7" in namespace "emptydir-7993" to be "Succeeded or Failed"
    Sep  5 16:00:28.327: INFO: Pod "pod-0f7ae8fd-ffca-4f99-b5d3-beb0f5cb00d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.266562ms
    Sep  5 16:00:30.334: INFO: Pod "pod-0f7ae8fd-ffca-4f99-b5d3-beb0f5cb00d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012839616s
    Sep  5 16:00:32.339: INFO: Pod "pod-0f7ae8fd-ffca-4f99-b5d3-beb0f5cb00d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017778971s
    STEP: Saw pod success
    Sep  5 16:00:32.339: INFO: Pod "pod-0f7ae8fd-ffca-4f99-b5d3-beb0f5cb00d7" satisfied condition "Succeeded or Failed"
    Sep  5 16:00:32.344: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-0f7ae8fd-ffca-4f99-b5d3-beb0f5cb00d7 container test-container: <nil>
    STEP: delete the pod
    Sep  5 16:00:32.368: INFO: Waiting for pod pod-0f7ae8fd-ffca-4f99-b5d3-beb0f5cb00d7 to disappear
    Sep  5 16:00:32.371: INFO: Pod pod-0f7ae8fd-ffca-4f99-b5d3-beb0f5cb00d7 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:00:32.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-7993" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":127,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Aggregator
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    Sep  5 16:00:35.020: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63797990432, loc:(*time.Location)(0xa04a040)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797990432, loc:(*time.Location)(0xa04a040)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63797990432, loc:(*time.Location)(0xa04a040)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797990432, loc:(*time.Location)(0xa04a040)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
    Sep  5 16:00:37.025: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63797990432, loc:(*time.Location)(0xa04a040)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797990432, loc:(*time.Location)(0xa04a040)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63797990432, loc:(*time.Location)(0xa04a040)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797990432, loc:(*time.Location)(0xa04a040)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
    Sep  5 16:00:39.027: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63797990432, loc:(*time.Location)(0xa04a040)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797990432, loc:(*time.Location)(0xa04a040)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63797990432, loc:(*time.Location)(0xa04a040)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797990432, loc:(*time.Location)(0xa04a040)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
    Sep  5 16:01:41.242: INFO: Waited 1m0.203473544s for the sample-apiserver to be ready to handle requests.
    Sep  5 16:01:41.242: INFO: current APIService: {"metadata":{"name":"v1alpha1.wardle.example.com","uid":"5d3f3452-8bc5-48bc-bdc7-c15a4fc550d2","resourceVersion":"14447","creationTimestamp":"2022-09-05T16:00:41Z","managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2022-09-05T16:00:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:caBundle":{},"f:group":{},"f:groupPriorityMinimum":{},"f:service":{".":{},"f:name":{},"f:namespace":{},"f:port":{}},"f:version":{},"f:versionPriority":{}}}},{"manager":"kube-apiserver","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2022-09-05T16:00:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}},"subresource":"status"}]},"spec":{"service":{"namespace":"aggregator-5742","name":"sample-api","port":7443},"group":"wardle.example.com","version":"v1alpha1","caBundle":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGakNDQWY2Z0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFkTVJzd0dRWURWUVFERXhKbE1tVXQKYzJWeWRtVnlMV05sY25RdFkyRXdIaGNOTWpJd09UQTFNVFl3TURNeVdoY05Nekl3T1RBeU1UWXdNRE15V2pBZApNUnN3R1FZRFZRUURFeEpsTW1VdGMyVnlkbVZ5TFdObGNuUXRZMkV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUUN6SXVrb0dqQWpKYTN2UGxXK213OGV0YXVETFpENVRLTWZkbFhmOERnb1NDbksKR0pEUzcwT0ZuQ1NKWG94UUc2OEFLUDNyNTl1REt1N3hwQVB0c1NVRVFqT20xZ3MvV0h3alY5YlJUUjlvTFJWOQpkWFg3c0ZwQW4xOUtwdWtDOGp3SC9mZ2ZqRmxYend0aXhvOFZ2R05kdlZjWGdEVFhaNGNNa3ZIWHR5cWpiN2x6CldKbW5pejVSelVqUkJhZ1AydUxIbWwvUkFEMzBIVThnU2VGdU5lM0tmT3pjVDBFQ2VocFhDZGtxNnpLVjFEbUEKelMzVzJJaGg3aDc3S3Zta0J0cnhGSEJwa0VoWXFOZUhTbDdXbmFEZlBRdGFTSG9DRkJ4bGxYK3BpWXpYTXRPMgpOVHJpVCt3V1BTUlAyam9kZnlvT05rWkF6RVBCVG5qN1hLd3hWNkxWQWdNQkFBR2pZVEJmTUE0R0ExVWREd0VCCi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTeUNrelpWMFZXR0RqNktYL1MKWHRoY0Mwa3ZnVEFkQmdOVkhSRUVGakFVZ2hKbE1tVXRjMlZ5ZG1WeUxXTmxjblF0WTJFd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBRWsrUlJ6OERtNmVqckZKdVE5N0lycUQvRmVVY1NmeDY3UzUyM3FKZ0cvSkhiQ0EwZEltCjFaWHdBOU1DTUNMSW1yUmxhRVhGVFE3ZzZTeTB1aGR6Qmc2aEU1a2l3cWIvc0JvS0ZBbDRZT2xYczlhbmx5VGoKUW9TRGRKN3grSW5IN3RZZTB4d0hqQlZ0bVRBNWF5MktDVVNJSUxiRXRmNkFjMm5ld0h3Rkx5RW1TRXVVQWNwNwpNUGRWZGVpYm9oa0ZOUVdyZVl0bFkrc3hZNURXdkdtcURrcmk0RVNQKzVGcG8yUWxqaHBoWVBya1FaMWU3dDh2CmNEVEwxOFVVSWNZNlp1a09TR1ZrQWlUY3Z5OTU2ZDdhdDExOHpPNzk3Sk1aNXVsZUh0bU9KMnBtY1N1MzFVSlYKL293QUFKWTVQaHYvU2tnTGg0Zjg5dE9SSTBZRXlYWVhFOVE9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K","groupPriorityMinimum":2000,"versionPriority":200},"status":{"conditions":[{"type":"Available","status":"False","lastTransitionTime":"2022-09-05T16:00:41Z","reason":"FailedDiscoveryCheck","message":"failing or missing response from https://10.139.115.45:7443/apis/wardle.example.com/v1alpha1: Get \"https://10.139.115.45:7443/apis/wardle.example.com/v1alpha1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"}]}}
    Sep  5 16:01:41.244: INFO: current pods: {"metadata":{"resourceVersion":"14463"},"items":[{"metadata":{"name":"sample-apiserver-deployment-64f6b9dc99-6nzbl","generateName":"sample-apiserver-deployment-64f6b9dc99-","namespace":"aggregator-5742","uid":"03e3480b-d88d-4c74-8b7f-6ed0aec35a71","resourceVersion":"14335","creationTimestamp":"2022-09-05T16:00:32Z","labels":{"apiserver":"true","app":"sample-apiserver","pod-template-hash":"64f6b9dc99"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"sample-apiserver-deployment-64f6b9dc99","uid":"59404b2a-dab0-4e8f-86a8-673fc94cf6dc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-05T16:00:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:apiserver":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"59404b2a-dab0-4e8f-86a8-673fc94cf6dc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"etcd\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}},"k:{\"name\":\"sample-apiserver\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/apiserver.local.config/certificates\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"apiserver-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-05T16:00:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.42\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"apiserver-certs","secret":{"secretName":"sample-apiserver-secret","defaultMode":420}},{"name":"kube-api-access-vzqpt","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"sample-apiserver","image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4","args":["--etcd-servers=http://127.0.0.1:2379","--tls-cert-file=/apiserver.local.config/certificates/tls.crt","--tls-private-key-file=/apiserver.local.config/certificates/tls.key","--audit-log-path=-","--audit-log-maxage=0","--audit-log-maxbackup=0"],"resources":{},"volumeMounts":[{"name":"apiserver-certs","readOnly":true,"mountPath":"/apiserver.local.config/certificates"},{"name":"kube-api-access-vzqpt","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":
"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"},{"name":"etcd","image":"k8s.gcr.io/etcd:3.4.13-0","command":["/usr/local/bin/etcd","--listen-client-urls","http://127.0.0.1:2379","--advertise-client-urls","http://127.0.0.1:2379"],"resources":{},"volumeMounts":[{"name":"kube-api-access-vzqpt","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"k8s-upgrade-and-conformance-rbkcco-worker-0xh5an","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-05T16:00:32Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-05T16:00:40Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-05T16:00:40Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-05T16:00:32Z"}],"hostIP":"172.18.0.6","podIP":"192.168.2.42","podIPs":[{"ip":"192.168.2.42"}],"startTime":"2022-09-05T16:00:32Z","containerStatuses":[{"name":"etcd","state":{"running":{"startedAt":"2022-09-05T16:00:39Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/etcd:3.4.13-0","imageID":"k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2","containerID":"containerd://e120fee53882c40d0c51d02e64128c01027f80ce65cf88f33d75b8cd53981f62","started":true},{"name":"sample-apiserver","state":{"running":{"startedAt":"2022-09-05T16:00:34Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4","imageID":"k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276","containerID":"containerd://032704134066ea8001a381e54138e083b3cc0c8c89d696741b0c919ecaf3b4e3","started":true}],"qosClass":"BestEffort"}}]}
    Sep  5 16:01:41.253: INFO: logs of sample-apiserver-deployment-64f6b9dc99-6nzbl/sample-apiserver (error: <nil>): W0905 16:00:35.406586       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
    W0905 16:00:35.406690       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
    I0905 16:00:35.438169       1 plugins.go:158] Loaded 3 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook,BanFlunder.
    I0905 16:00:35.438210       1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook.
    I0905 16:00:35.439753       1 client.go:361] parsed scheme: "endpoint"
    I0905 16:00:35.439796       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
    W0905 16:00:35.440224       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
    I0905 16:00:36.071211       1 client.go:361] parsed scheme: "endpoint"
    I0905 16:00:36.071327       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
    W0905 16:00:36.072501       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
    W0905 16:00:36.440916       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
    W0905 16:00:37.073363       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
    W0905 16:00:38.187728       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
    W0905 16:00:38.751050       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
    I0905 16:00:40.662769       1 client.go:361] parsed scheme: "endpoint"
    I0905 16:00:40.662906       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
    I0905 16:00:40.664986       1 client.go:361] parsed scheme: "endpoint"
    I0905 16:00:40.665039       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
    I0905 16:00:40.666840       1 client.go:361] parsed scheme: "endpoint"
    I0905 16:00:40.666894       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
... skipping 4 lines ...
    I0905 16:00:40.723733       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
    I0905 16:00:40.723990       1 secure_serving.go:178] Serving securely on [::]:443
    I0905 16:00:40.724065       1 tlsconfig.go:219] Starting DynamicServingCertificateController
    I0905 16:00:40.823844       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
    I0905 16:00:40.823958       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
    
    Sep  5 16:01:41.264: INFO: logs of sample-apiserver-deployment-64f6b9dc99-6nzbl/etcd (error: <nil>): [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
    2022-09-05 16:00:39.420743 I | etcdmain: etcd Version: 3.4.13
    2022-09-05 16:00:39.420876 I | etcdmain: Git SHA: ae9734ed2
    2022-09-05 16:00:39.420882 I | etcdmain: Go Version: go1.12.17
    2022-09-05 16:00:39.420942 I | etcdmain: Go OS/Arch: linux/amd64
    2022-09-05 16:00:39.420950 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
    2022-09-05 16:00:39.420960 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
... skipping 26 lines ...
    2022-09-05 16:00:40.435180 N | etcdserver/membership: set the initial cluster version to 3.4
    2022-09-05 16:00:40.435280 I | etcdserver/api: enabled capabilities for version 3.4
    2022-09-05 16:00:40.435320 I | etcdserver: published {Name:default ClientURLs:[http://127.0.0.1:2379]} to cluster cdf818194e3a8c32
    2022-09-05 16:00:40.435474 I | embed: ready to serve client requests
    2022-09-05 16:00:40.436624 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
    
    Sep  5 16:01:41.264: FAIL: gave up waiting for apiservice wardle to come up successfully
    Unexpected error:
        <*errors.errorString | 0xc0002bc280>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 22 lines ...
    [sig-api-machinery] Aggregator
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 16:01:41.264: gave up waiting for apiservice wardle to come up successfully
      Unexpected error:
          <*errors.errorString | 0xc0002bc280>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 28 lines ...
    • [SLOW TEST:152.478 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should have monotonically increasing restart count [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":665,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:02:15.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-504" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":-1,"completed":45,"skipped":671,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    STEP: Destroying namespace "webhook-6241-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":46,"skipped":684,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:02:18.925: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
    STEP: Destroying namespace "webhook-6552-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":47,"skipped":684,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:02:28.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-9579" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":48,"skipped":690,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:02:30.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-9298" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":49,"skipped":694,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":8,"skipped":163,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    [BeforeEach] [sig-api-machinery] Aggregator
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:01:41.649: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename aggregator
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 3 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Registering the sample API server.
    Sep  5 16:01:42.490: INFO: new replicaset for deployment "sample-apiserver-deployment" is yet to be created
    Sep  5 16:01:44.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63797990502, loc:(*time.Location)(0xa04a040)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797990502, loc:(*time.Location)(0xa04a040)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63797990502, loc:(*time.Location)(0xa04a040)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797990502, loc:(*time.Location)(0xa04a040)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
    Sep  5 16:02:46.812: INFO: Waited 1m0.200605027s for the sample-apiserver to be ready to handle requests.
    Sep  5 16:02:46.812: INFO: current APIService: {"metadata":{"name":"v1alpha1.wardle.example.com","uid":"d05b752f-c1c7-4031-abf3-39049fbffa84","resourceVersion":"14930","creationTimestamp":"2022-09-05T16:01:46Z","managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2022-09-05T16:01:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:caBundle":{},"f:group":{},"f:groupPriorityMinimum":{},"f:service":{".":{},"f:name":{},"f:namespace":{},"f:port":{}},"f:version":{},"f:versionPriority":{}}}},{"manager":"kube-apiserver","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2022-09-05T16:01:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}},"subresource":"status"}]},"spec":{"service":{"namespace":"aggregator-9013","name":"sample-api","port":7443},"group":"wardle.example.com","version":"v1alpha1","caBundle":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGakNDQWY2Z0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFkTVJzd0dRWURWUVFERXhKbE1tVXQKYzJWeWRtVnlMV05sY25RdFkyRXdIaGNOTWpJd09UQTFNVFl3TVRReVdoY05Nekl3T1RBeU1UWXdNVFF5V2pBZApNUnN3R1FZRFZRUURFeEpsTW1VdGMyVnlkbVZ5TFdObGNuUXRZMkV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUURKMm8xOHpnbHFxV20xOVRhanZ4MjlnR0RJZUVGR1dZWEdIejJKdXIzMnRsajIKZVFOREU1c3FMOU9DS3FUSlpjVjY4Uno4aWcyTkdqaEdrRlZmbFhBRkZ1VWFLRXg5bWJMY01tOWRSUlo0aGhNYQpEM2ZUZWFvaXltekNBL2s4V0JZdlJ3b0lyd2JsSEVlcXU1dWFucWhvam5XTFpoWkZBTXZ1NXhGZ01SN25Bd1RRCkpHZDhENHFpNFUyeDR5VUsrQ0xQTzZKWk1TZ0pVWUlsNzVNaWhqYzZ5blh3VTVWZUNCanNFL25iaGxkMjNZdFQKWXdkeXhudE43U2FTUzl6bFdiNXd3SHpVY0tkRlV6V1kweUJZd3VNZ1NzelVaTGtJeFEyUllETVB4VzlyeCtHMQplRWZBU0JVZmliWW80UUhGdUZ0bnRheGhvVS9TWDdXV3hPMk9KaG5IQWdNQkFBR2pZVEJmTUE0R0ExVWREd0VCCi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTYUIvWlhJT3hTQll1VUcvMlYKY08wRjFtUW9WekFkQmdOVkhSRUVGakFVZ2hKbE1tVXRjMlZ5ZG1WeUxXTmxjblF0WTJFd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBRHpXS0R2RWV0N0xyYVViOEY4c3RaeG5WKzNXNndhWHl2d3krMWJRVStHcHFpWFdvNWJHCjR0eE11bzJSYUtlM3lieHlWR0NPMnAyejJvcWJ1R0VLaXdiK3VOZEdod2I3REJZdU4zTEFTS0ZSRjVKVlRSMGoKek5Da1hjZDNGSHRWRG1MaXNibEJjNkhGU2xPL1huUC96QnJEaFdVUnZMaDJqdTJNRU1XTTh6TGxRcWRXak0zbQpjT1FJdHNXOWxHVXFnVW5SOHpOcy90dVJ6YUtDSmtKWEZ0R21MT0JOeEdwZXYwbGRQd21QbTVoajN3aXpOWFhHCndTUUwyMWhhM2RnSTBldVJPT2dkMUhoYlIyOHRBT08vOE1jTVFVU25qVEdhNE1NdEdhWEZXV2R0NW5TOU9ndC8KVEgxbUZlZ0prZlMrT0trQk1lNk5LSEdYSW5PQ1JuaWNHenM9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K","groupPriorityMinimum":2000,"versionPriority":200},"status":{"conditions":[{"type":"Available","status":"False","lastTransitionTime":"2022-09-05T16:01:46Z","reason":"FailedDiscoveryCheck","message":"failing or missing response from https://10.133.196.83:7443/apis/wardle.example.com/v1alpha1: Get \"https://10.133.196.83:7443/apis/wardle.example.com/v1alpha1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"}]}}
    Sep  5 16:02:46.813: INFO: current pods: {"metadata":{"resourceVersion":"14943"},"items":[{"metadata":{"name":"sample-apiserver-deployment-64f6b9dc99-rb7l4","generateName":"sample-apiserver-deployment-64f6b9dc99-","namespace":"aggregator-9013","uid":"4b1a7282-7860-44db-ba94-893de54723ee","resourceVersion":"14529","creationTimestamp":"2022-09-05T16:01:42Z","labels":{"apiserver":"true","app":"sample-apiserver","pod-template-hash":"64f6b9dc99"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"sample-apiserver-deployment-64f6b9dc99","uid":"0410e3ae-1646-4414-9ea0-2722abea94c7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-05T16:01:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:apiserver":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0410e3ae-1646-4414-9ea0-2722abea94c7\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"etcd\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}},"k:{\"name\":\"sample-apiserver\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/apiserver.local.config/certificates\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"apiserver-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-05T16:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.43\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"apiserver-certs","secret":{"secretName":"sample-apiserver-secret","defaultMode":420}},{"name":"kube-api-access-m9blj","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"sample-apiserver","image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4","args":["--etcd-servers=http://127.0.0.1:2379","--tls-cert-file=/apiserver.local.config/certificates/tls.crt","--tls-private-key-file=/apiserver.local.config/certificates/tls.key","--audit-log-path=-","--audit-log-maxage=0","--audit-log-maxbackup=0"],"resources":{},"volumeMounts":[{"name":"apiserver-certs","readOnly":true,"mountPath":"/apiserver.local.config/certificates"},{"name":"kube-api-access-m9blj","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":
"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"},{"name":"etcd","image":"k8s.gcr.io/etcd:3.4.13-0","command":["/usr/local/bin/etcd","--listen-client-urls","http://127.0.0.1:2379","--advertise-client-urls","http://127.0.0.1:2379"],"resources":{},"volumeMounts":[{"name":"kube-api-access-m9blj","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"k8s-upgrade-and-conformance-rbkcco-worker-0xh5an","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-05T16:01:42Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-05T16:01:45Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-05T16:01:45Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-05T16:01:42Z"}],"hostIP":"172.18.0.6","podIP":"192.168.2.43","podIPs":[{"ip":"192.168.2.43"}],"startTime":"2022-09-05T16:01:42Z","containerStatuses":[{"name":"etcd","state":{"running":{"startedAt":"2022-09-05T16:01:43Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/etcd:3.4.13-0","imageID":"k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2","containerID":"containerd://41b7636e358c507c64a356ce35c5487db19ed18bfed360f883045c89c3b14fe8","started":true},{"name":"sample-apiserver","state":{"running":{"startedAt":"2022-09-05T16:01:44Z"}},"lastState":{"terminated":{"exitCode":255,"reason":"Error","startedAt":"2022-09-05T16:01:43Z","finishedAt":"2022-09-05T16:01:43Z","containerID":"containerd://bf0979398febd1bfcd57005475f595f67278c3fee24a24700bfbeaf6609d35be"}},"ready":true,"restartCount":1,"image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4","imageID":"k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276","containerID":"containerd://947c6fd3351aeeeba99850cc36f322e8a3271ab6717f8665453c5618b9b101e2","started":true}],"qosClass":"BestEffort"}}]}
    Sep  5 16:02:46.820: INFO: logs of sample-apiserver-deployment-64f6b9dc99-rb7l4/sample-apiserver (error: <nil>): W0905 16:01:45.084947       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
    W0905 16:01:45.085038       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
    I0905 16:01:45.106602       1 plugins.go:158] Loaded 3 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook,BanFlunder.
    I0905 16:01:45.106639       1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook.
    I0905 16:01:45.108120       1 client.go:361] parsed scheme: "endpoint"
    I0905 16:01:45.108318       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
    I0905 16:01:45.115027       1 client.go:361] parsed scheme: "endpoint"
... skipping 11 lines ...
    I0905 16:01:45.186814       1 tlsconfig.go:219] Starting DynamicServingCertificateController
    I0905 16:01:45.287578       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
    I0905 16:01:45.287590       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
    I0905 16:01:45.497064       1 client.go:361] parsed scheme: "endpoint"
    I0905 16:01:45.497151       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
    
    Sep  5 16:02:46.828: INFO: logs of sample-apiserver-deployment-64f6b9dc99-rb7l4/etcd (error: <nil>): [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
    2022-09-05 16:01:43.355417 I | etcdmain: etcd Version: 3.4.13
    2022-09-05 16:01:43.355473 I | etcdmain: Git SHA: ae9734ed2
    2022-09-05 16:01:43.355478 I | etcdmain: Go Version: go1.12.17
    2022-09-05 16:01:43.355487 I | etcdmain: Go OS/Arch: linux/amd64
    2022-09-05 16:01:43.355491 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
    2022-09-05 16:01:43.355498 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
... skipping 26 lines ...
    2022-09-05 16:01:43.470558 N | etcdserver/membership: set the initial cluster version to 3.4
    2022-09-05 16:01:43.470627 I | etcdserver: published {Name:default ClientURLs:[http://127.0.0.1:2379]} to cluster cdf818194e3a8c32
    2022-09-05 16:01:43.470646 I | etcdserver/api: enabled capabilities for version 3.4
    2022-09-05 16:01:43.470662 I | embed: ready to serve client requests
    2022-09-05 16:01:43.471465 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
    
    Sep  5 16:02:46.828: FAIL: gave up waiting for apiservice wardle to come up successfully

    Unexpected error:

        <*errors.errorString | 0xc0002bc280>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 22 lines ...
    [sig-api-machinery] Aggregator
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 16:02:46.828: gave up waiting for apiservice wardle to come up successfully
      Unexpected error:

          <*errors.errorString | 0xc0002bc280>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:406
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":8,"skipped":163,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    [BeforeEach] [sig-api-machinery] Aggregator
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:02:47.175: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename aggregator
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:02:56.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "aggregator-674" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":9,"skipped":163,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 159 lines ...
    Sep  5 15:59:17.607: INFO: stderr: ""
    Sep  5 15:59:17.609: INFO: stdout: "deployment.apps/agnhost-replica created\n"
    STEP: validating guestbook app
    Sep  5 15:59:17.609: INFO: Waiting for all frontend pods to be Running.
    Sep  5 15:59:22.659: INFO: Waiting for frontend to serve content.
    Sep  5 15:59:22.671: INFO: Trying to add a new entry to the guestbook.
    Sep  5 16:02:55.538: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response (raw v1.Status): Status="Failure", Message="error trying to reach service: read tcp 172.18.0.9:57678->192.168.2.38:80: read: connection reset by peer", Reason="ServiceUnavailable"
    
    Sep  5 16:03:00.540: FAIL: Cannot add new entry in 180 seconds.

    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/kubectl.glob..func1.7.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:375 +0x159
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000986c00)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 60 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:03:02.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-8755" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":10,"skipped":214,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":41,"skipped":976,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:03:01.393: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename kubectl
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 188 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:03:09.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-6208" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":42,"skipped":976,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:02:30.487: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename init-container
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
    [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod
    Sep  5 16:02:30.533: INFO: PodSpec: initContainers in spec.initContainers
    Sep  5 16:03:10.608: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-1d3c505a-2545-4968-9870-7eb721ff357a", GenerateName:"", Namespace:"init-container-5058", SelfLink:"", UID:"df084508-1561-4c2b-8ce6-49b3b5e91170", ResourceVersion:"15448", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63797990550, loc:(*time.Location)(0xa04a040)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"533272381"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000b49b78), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000b49b90), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000b49ba8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000b49bc0), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-k9qqg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc002cb2a00), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-k9qqg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", 
Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-k9qqg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-k9qqg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0034806e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"k8s-upgrade-and-conformance-rbkcco-worker-0xh5an", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0037498f0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003480760)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003480780)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003480788), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00348078c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0029b7960), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", 
Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797990550, loc:(*time.Location)(0xa04a040)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797990550, loc:(*time.Location)(0xa04a040)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797990550, loc:(*time.Location)(0xa04a040)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797990550, loc:(*time.Location)(0xa04a040)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.6", PodIP:"192.168.2.44", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.2.44"}}, StartTime:(*v1.Time)(0xc000b49bf0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0037499d0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003749a40)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"containerd://cb9bef50d05834bf2731e90f9b40fa7fc6a86658a933ea1d46b222138d72a5a6", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002cb2ac0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002cb2aa0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.5", ImageID:"", ContainerID:"", Started:(*bool)(0xc00348080f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}

    [AfterEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:03:10.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-5058" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":50,"skipped":705,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:03:10.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-3281" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":51,"skipped":717,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:03:12.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7618" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":43,"skipped":980,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:03:10.794: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override arguments
    Sep  5 16:03:10.849: INFO: Waiting up to 5m0s for pod "client-containers-91d791b3-c8cb-435b-b738-01d342f31991" in namespace "containers-8046" to be "Succeeded or Failed"

    Sep  5 16:03:10.854: INFO: Pod "client-containers-91d791b3-c8cb-435b-b738-01d342f31991": Phase="Pending", Reason="", readiness=false. Elapsed: 5.255765ms
    Sep  5 16:03:12.859: INFO: Pod "client-containers-91d791b3-c8cb-435b-b738-01d342f31991": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010451248s
    Sep  5 16:03:14.864: INFO: Pod "client-containers-91d791b3-c8cb-435b-b738-01d342f31991": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014938524s
    STEP: Saw pod success
    Sep  5 16:03:14.864: INFO: Pod "client-containers-91d791b3-c8cb-435b-b738-01d342f31991" satisfied condition "Succeeded or Failed"

    Sep  5 16:03:14.869: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-grv26 pod client-containers-91d791b3-c8cb-435b-b738-01d342f31991 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 16:03:14.898: INFO: Waiting for pod client-containers-91d791b3-c8cb-435b-b738-01d342f31991 to disappear
    Sep  5 16:03:14.902: INFO: Pod client-containers-91d791b3-c8cb-435b-b738-01d342f31991 no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:03:14.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-8046" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":52,"skipped":720,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's memory limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 16:03:14.980: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fc9a901d-9fcb-4a3a-83e5-1eb730dbc80e" in namespace "downward-api-5580" to be "Succeeded or Failed"

    Sep  5 16:03:14.983: INFO: Pod "downwardapi-volume-fc9a901d-9fcb-4a3a-83e5-1eb730dbc80e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.226436ms
    Sep  5 16:03:16.987: INFO: Pod "downwardapi-volume-fc9a901d-9fcb-4a3a-83e5-1eb730dbc80e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007485045s
    Sep  5 16:03:18.992: INFO: Pod "downwardapi-volume-fc9a901d-9fcb-4a3a-83e5-1eb730dbc80e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012590214s
    STEP: Saw pod success
    Sep  5 16:03:18.992: INFO: Pod "downwardapi-volume-fc9a901d-9fcb-4a3a-83e5-1eb730dbc80e" satisfied condition "Succeeded or Failed"

    Sep  5 16:03:18.996: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-grv26 pod downwardapi-volume-fc9a901d-9fcb-4a3a-83e5-1eb730dbc80e container client-container: <nil>
    STEP: delete the pod
    Sep  5 16:03:19.026: INFO: Waiting for pod downwardapi-volume-fc9a901d-9fcb-4a3a-83e5-1eb730dbc80e to disappear
    Sep  5 16:03:19.030: INFO: Pod downwardapi-volume-fc9a901d-9fcb-4a3a-83e5-1eb730dbc80e no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:03:19.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-5580" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":732,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:03:19.043: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  5 16:03:19.089: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-0daf68d5-11da-4662-905a-7deab09f4faa" in namespace "security-context-test-9743" to be "Succeeded or Failed"

    Sep  5 16:03:19.093: INFO: Pod "alpine-nnp-false-0daf68d5-11da-4662-905a-7deab09f4faa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.364186ms
    Sep  5 16:03:21.098: INFO: Pod "alpine-nnp-false-0daf68d5-11da-4662-905a-7deab09f4faa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008565888s
    Sep  5 16:03:23.103: INFO: Pod "alpine-nnp-false-0daf68d5-11da-4662-905a-7deab09f4faa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013855065s
    Sep  5 16:03:25.108: INFO: Pod "alpine-nnp-false-0daf68d5-11da-4662-905a-7deab09f4faa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019298709s
    Sep  5 16:03:25.109: INFO: Pod "alpine-nnp-false-0daf68d5-11da-4662-905a-7deab09f4faa" satisfied condition "Succeeded or Failed"

    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:03:25.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-9743" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":54,"skipped":733,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:03:31.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-9104" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":55,"skipped":741,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:03:32.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-6036" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":56,"skipped":787,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:03:32.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-6943" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":57,"skipped":793,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:03:32.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-9448" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":44,"skipped":990,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 34 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:03:37.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-2144" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":-1,"completed":58,"skipped":805,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
    STEP: Destroying namespace "webhook-42-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":45,"skipped":1017,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] RuntimeClass
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:03:39.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "runtimeclass-593" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] RuntimeClass  should support RuntimeClasses API operations [Conformance]","total":-1,"completed":46,"skipped":1032,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:03:45.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-5635" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":47,"skipped":1081,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Servers with support for Table transformation
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:03:46.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "tables-6490" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":48,"skipped":1194,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:03:48.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-2850" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":59,"skipped":819,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:03:46.185: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on tmpfs
    Sep  5 16:03:46.225: INFO: Waiting up to 5m0s for pod "pod-ec01568b-9898-4683-a0a0-6e25db718f36" in namespace "emptydir-7343" to be "Succeeded or Failed"

    Sep  5 16:03:46.228: INFO: Pod "pod-ec01568b-9898-4683-a0a0-6e25db718f36": Phase="Pending", Reason="", readiness=false. Elapsed: 3.121434ms
    Sep  5 16:03:48.233: INFO: Pod "pod-ec01568b-9898-4683-a0a0-6e25db718f36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00870085s
    Sep  5 16:03:50.238: INFO: Pod "pod-ec01568b-9898-4683-a0a0-6e25db718f36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013304773s
    STEP: Saw pod success
    Sep  5 16:03:50.238: INFO: Pod "pod-ec01568b-9898-4683-a0a0-6e25db718f36" satisfied condition "Succeeded or Failed"

    Sep  5 16:03:50.241: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-grv26 pod pod-ec01568b-9898-4683-a0a0-6e25db718f36 container test-container: <nil>
    STEP: delete the pod
    Sep  5 16:03:50.258: INFO: Waiting for pod pod-ec01568b-9898-4683-a0a0-6e25db718f36 to disappear
    Sep  5 16:03:50.263: INFO: Pod pod-ec01568b-9898-4683-a0a0-6e25db718f36 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:03:50.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-7343" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":1207,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:03:50.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-4379" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":60,"skipped":834,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Ingress API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 36 lines ...
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-99b614df-ca45-43f6-b0df-3da5e43bfed4
    STEP: Creating a pod to test consume secrets
    Sep  5 16:03:50.356: INFO: Waiting up to 5m0s for pod "pod-secrets-c56deaae-ec2f-4414-b95d-428bb072dbd1" in namespace "secrets-4539" to be "Succeeded or Failed"

    Sep  5 16:03:50.361: INFO: Pod "pod-secrets-c56deaae-ec2f-4414-b95d-428bb072dbd1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.479928ms
    Sep  5 16:03:52.368: INFO: Pod "pod-secrets-c56deaae-ec2f-4414-b95d-428bb072dbd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011947918s
    Sep  5 16:03:54.374: INFO: Pod "pod-secrets-c56deaae-ec2f-4414-b95d-428bb072dbd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018185138s
    STEP: Saw pod success
    Sep  5 16:03:54.374: INFO: Pod "pod-secrets-c56deaae-ec2f-4414-b95d-428bb072dbd1" satisfied condition "Succeeded or Failed"

    Sep  5 16:03:54.380: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-grv26 pod pod-secrets-c56deaae-ec2f-4414-b95d-428bb072dbd1 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  5 16:03:54.406: INFO: Waiting for pod pod-secrets-c56deaae-ec2f-4414-b95d-428bb072dbd1 to disappear
    Sep  5 16:03:54.410: INFO: Pod pod-secrets-c56deaae-ec2f-4414-b95d-428bb072dbd1 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:03:54.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-4539" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":1228,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:03:54.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-9702" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":51,"skipped":1231,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:03:54.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-7481" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":61,"skipped":847,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:03:50.760: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-4648f0a5-5c06-40bb-837d-bdb247370492
    STEP: Creating a pod to test consume configMaps
    Sep  5 16:03:50.819: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-75a18754-8529-4b18-adde-95a98817c7fc" in namespace "projected-2790" to be "Succeeded or Failed"

    Sep  5 16:03:50.824: INFO: Pod "pod-projected-configmaps-75a18754-8529-4b18-adde-95a98817c7fc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.187276ms
    Sep  5 16:03:52.831: INFO: Pod "pod-projected-configmaps-75a18754-8529-4b18-adde-95a98817c7fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01142249s
    Sep  5 16:03:54.856: INFO: Pod "pod-projected-configmaps-75a18754-8529-4b18-adde-95a98817c7fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036918805s
    STEP: Saw pod success
    Sep  5 16:03:54.856: INFO: Pod "pod-projected-configmaps-75a18754-8529-4b18-adde-95a98817c7fc" satisfied condition "Succeeded or Failed"

    Sep  5 16:03:54.877: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-0xh5an pod pod-projected-configmaps-75a18754-8529-4b18-adde-95a98817c7fc container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 16:03:54.898: INFO: Waiting for pod pod-projected-configmaps-75a18754-8529-4b18-adde-95a98817c7fc to disappear
    Sep  5 16:03:54.903: INFO: Pod pod-projected-configmaps-75a18754-8529-4b18-adde-95a98817c7fc no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:03:54.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2790" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":62,"skipped":847,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 48 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:04:19.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-1346" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":63,"skipped":878,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    STEP: Destroying namespace "webhook-5871-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":64,"skipped":898,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:04:30.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-3409" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":65,"skipped":908,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:04:30.400: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
    Sep  5 16:04:30.447: INFO: Waiting up to 5m0s for pod "security-context-465f9b8f-245d-4ed2-b546-9af89980a5ef" in namespace "security-context-5874" to be "Succeeded or Failed"

    Sep  5 16:04:30.452: INFO: Pod "security-context-465f9b8f-245d-4ed2-b546-9af89980a5ef": Phase="Pending", Reason="", readiness=false. Elapsed: 5.016128ms
    Sep  5 16:04:32.457: INFO: Pod "security-context-465f9b8f-245d-4ed2-b546-9af89980a5ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00986816s
    Sep  5 16:04:34.463: INFO: Pod "security-context-465f9b8f-245d-4ed2-b546-9af89980a5ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01579676s
    STEP: Saw pod success
    Sep  5 16:04:34.463: INFO: Pod "security-context-465f9b8f-245d-4ed2-b546-9af89980a5ef" satisfied condition "Succeeded or Failed"

    Sep  5 16:04:34.469: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-vcz8p pod security-context-465f9b8f-245d-4ed2-b546-9af89980a5ef container test-container: <nil>
    STEP: delete the pod
    Sep  5 16:04:34.509: INFO: Waiting for pod security-context-465f9b8f-245d-4ed2-b546-9af89980a5ef to disappear
    Sep  5 16:04:34.515: INFO: Pod security-context-465f9b8f-245d-4ed2-b546-9af89980a5ef no longer exists
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:04:34.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-5874" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":66,"skipped":924,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":52,"skipped":1252,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:03:54.725: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 130 lines ...
    Sep  5 16:06:01.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1574 exec execpod-affinityjg4tx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  5 16:06:03.245: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n"
    Sep  5 16:06:03.245: INFO: stdout: ""
    Sep  5 16:06:03.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1574 exec execpod-affinityjg4tx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  5 16:06:05.402: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n"
    Sep  5 16:06:05.402: INFO: stdout: ""
    Sep  5 16:06:05.403: FAIL: Unexpected error:
        <*errors.errorString | 0xc002c683e0>: {
            s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol",
        }
        service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol
    occurred
    
... skipping 27 lines ...
    • Failure [132.593 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
      should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 16:06:05.403: Unexpected error:
          <*errors.errorString | 0xc002c683e0>: {
              s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol",
          }
          service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol
      occurred
    
... skipping 23 lines ...
    • [SLOW TEST:242.693 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":67,"skipped":1050,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:08:54.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-6572" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":68,"skipped":1073,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
    • [SLOW TEST:358.100 seconds]
    [sig-apps] CronJob
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":11,"skipped":247,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:09:04.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9998" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":259,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:09:04.519: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
    STEP: Destroying namespace "services-1552" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":13,"skipped":259,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:09:09.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-7038" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":14,"skipped":269,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:09:09.589: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  5 16:09:09.665: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-b8b6c13b-1afa-4892-bca1-1174770fb7a4" in namespace "security-context-test-9768" to be "Succeeded or Failed"
    Sep  5 16:09:09.677: INFO: Pod "busybox-privileged-false-b8b6c13b-1afa-4892-bca1-1174770fb7a4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.51976ms
    Sep  5 16:09:11.685: INFO: Pod "busybox-privileged-false-b8b6c13b-1afa-4892-bca1-1174770fb7a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018909908s
    Sep  5 16:09:13.690: INFO: Pod "busybox-privileged-false-b8b6c13b-1afa-4892-bca1-1174770fb7a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023936481s
    Sep  5 16:09:13.691: INFO: Pod "busybox-privileged-false-b8b6c13b-1afa-4892-bca1-1174770fb7a4" satisfied condition "Succeeded or Failed"
    Sep  5 16:09:13.707: INFO: Got logs for pod "busybox-privileged-false-b8b6c13b-1afa-4892-bca1-1174770fb7a4": "ip: RTNETLINK answers: Operation not permitted\n"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:09:13.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-9768" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":272,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:09:19.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-1739" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":291,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:09:22.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-8851" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":17,"skipped":315,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:10:14.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-7882" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":338,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:10:14.430: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override command
    Sep  5 16:10:14.472: INFO: Waiting up to 5m0s for pod "client-containers-251e3032-58fc-4ac1-9f9a-2940359b1d2a" in namespace "containers-2537" to be "Succeeded or Failed"
    Sep  5 16:10:14.478: INFO: Pod "client-containers-251e3032-58fc-4ac1-9f9a-2940359b1d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.577321ms
    Sep  5 16:10:16.482: INFO: Pod "client-containers-251e3032-58fc-4ac1-9f9a-2940359b1d2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010292241s
    Sep  5 16:10:18.489: INFO: Pod "client-containers-251e3032-58fc-4ac1-9f9a-2940359b1d2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017141433s
    STEP: Saw pod success
    Sep  5 16:10:18.489: INFO: Pod "client-containers-251e3032-58fc-4ac1-9f9a-2940359b1d2a" satisfied condition "Succeeded or Failed"
    Sep  5 16:10:18.493: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod client-containers-251e3032-58fc-4ac1-9f9a-2940359b1d2a container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 16:10:18.513: INFO: Waiting for pod client-containers-251e3032-58fc-4ac1-9f9a-2940359b1d2a to disappear
    Sep  5 16:10:18.517: INFO: Pod client-containers-251e3032-58fc-4ac1-9f9a-2940359b1d2a no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:10:18.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-2537" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":367,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 29 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:10:26.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7539" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":20,"skipped":372,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] PreStop
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:10:35.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "prestop-7869" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":21,"skipped":389,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:10:52.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-6681" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":22,"skipped":393,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:10:52.236: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  5 16:10:54.292: INFO: Deleting pod "var-expansion-8802d787-7e42-4c62-bddc-351aad5fc54b" in namespace "var-expansion-4857"
    Sep  5 16:10:54.299: INFO: Wait up to 5m0s for pod "var-expansion-8802d787-7e42-4c62-bddc-351aad5fc54b" to be fully deleted
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:10:56.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-4857" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":23,"skipped":426,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-configmap-md25
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  5 16:10:56.379: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-md25" in namespace "subpath-4066" to be "Succeeded or Failed"
    Sep  5 16:10:56.384: INFO: Pod "pod-subpath-test-configmap-md25": Phase="Pending", Reason="", readiness=false. Elapsed: 5.077228ms
    Sep  5 16:10:58.390: INFO: Pod "pod-subpath-test-configmap-md25": Phase="Running", Reason="", readiness=true. Elapsed: 2.01080901s
    Sep  5 16:11:00.396: INFO: Pod "pod-subpath-test-configmap-md25": Phase="Running", Reason="", readiness=true. Elapsed: 4.017291797s
    Sep  5 16:11:02.400: INFO: Pod "pod-subpath-test-configmap-md25": Phase="Running", Reason="", readiness=true. Elapsed: 6.020678671s
    Sep  5 16:11:04.405: INFO: Pod "pod-subpath-test-configmap-md25": Phase="Running", Reason="", readiness=true. Elapsed: 8.026083414s
    Sep  5 16:11:06.411: INFO: Pod "pod-subpath-test-configmap-md25": Phase="Running", Reason="", readiness=true. Elapsed: 10.031528925s
... skipping 2 lines ...
    Sep  5 16:11:12.424: INFO: Pod "pod-subpath-test-configmap-md25": Phase="Running", Reason="", readiness=true. Elapsed: 16.045212399s
    Sep  5 16:11:14.431: INFO: Pod "pod-subpath-test-configmap-md25": Phase="Running", Reason="", readiness=true. Elapsed: 18.052388274s
    Sep  5 16:11:16.437: INFO: Pod "pod-subpath-test-configmap-md25": Phase="Running", Reason="", readiness=true. Elapsed: 20.057935781s
    Sep  5 16:11:18.442: INFO: Pod "pod-subpath-test-configmap-md25": Phase="Running", Reason="", readiness=false. Elapsed: 22.06316797s
    Sep  5 16:11:20.449: INFO: Pod "pod-subpath-test-configmap-md25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.069587321s
    STEP: Saw pod success
    Sep  5 16:11:20.449: INFO: Pod "pod-subpath-test-configmap-md25" satisfied condition "Succeeded or Failed"
    Sep  5 16:11:20.453: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-vcz8p pod pod-subpath-test-configmap-md25 container test-container-subpath-configmap-md25: <nil>
    STEP: delete the pod
    Sep  5 16:11:20.489: INFO: Waiting for pod pod-subpath-test-configmap-md25 to disappear
    Sep  5 16:11:20.493: INFO: Pod pod-subpath-test-configmap-md25 no longer exists
    STEP: Deleting pod pod-subpath-test-configmap-md25
    Sep  5 16:11:20.493: INFO: Deleting pod "pod-subpath-test-configmap-md25" in namespace "subpath-4066"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:11:20.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-4066" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":24,"skipped":430,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:11:20.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-81" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":25,"skipped":438,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:11:20.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-1918" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":26,"skipped":458,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:11:57.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-3459" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":27,"skipped":511,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:11:58.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-4462" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":28,"skipped":537,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:11:58.076: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on tmpfs
    Sep  5 16:11:58.112: INFO: Waiting up to 5m0s for pod "pod-420b989f-cfc8-46e5-bb70-eff04650c93e" in namespace "emptydir-1272" to be "Succeeded or Failed"
    Sep  5 16:11:58.116: INFO: Pod "pod-420b989f-cfc8-46e5-bb70-eff04650c93e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.462594ms
    Sep  5 16:12:00.121: INFO: Pod "pod-420b989f-cfc8-46e5-bb70-eff04650c93e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008480412s
    Sep  5 16:12:02.126: INFO: Pod "pod-420b989f-cfc8-46e5-bb70-eff04650c93e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013410214s
    STEP: Saw pod success
    Sep  5 16:12:02.126: INFO: Pod "pod-420b989f-cfc8-46e5-bb70-eff04650c93e" satisfied condition "Succeeded or Failed"
    Sep  5 16:12:02.129: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-420b989f-cfc8-46e5-bb70-eff04650c93e container test-container: <nil>
    STEP: delete the pod
    Sep  5 16:12:02.153: INFO: Waiting for pod pod-420b989f-cfc8-46e5-bb70-eff04650c93e to disappear
    Sep  5 16:12:02.157: INFO: Pod pod-420b989f-cfc8-46e5-bb70-eff04650c93e no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:12:02.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-1272" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":563,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir wrapper volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:12:04.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-wrapper-3807" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":30,"skipped":665,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-network] HostPort
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 29 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:12:19.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "hostport-641" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":31,"skipped":668,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:12:35.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-5319" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":32,"skipped":670,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "webhook-6266-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":33,"skipped":674,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 106 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:13:40.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-3541" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":34,"skipped":679,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:13:51.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-5437" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":35,"skipped":696,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
    STEP: Destroying namespace "services-5853" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":36,"skipped":724,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:13:55.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-4567" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":37,"skipped":736,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
    
    STEP: creating a pod to probe DNS
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  5 16:12:31.026: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-2971/dns-test-064d8b7d-e23f-4399-87bc-6234c3877b93: the server is currently unable to handle the request (get pods dns-test-064d8b7d-e23f-4399-87bc-6234c3877b93)
    Sep  5 16:13:56.852: FAIL: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-2971/dns-test-064d8b7d-e23f-4399-87bc-6234c3877b93: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-2971/pods/dns-test-064d8b7d-e23f-4399-87bc-6234c3877b93/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local": context deadline exceeded
    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc000124010, 0x7f4f92dca5b8, 0x18, 0xc00169ffc8)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc000124010, 0xc001e4f600, 0x2a14500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f
... skipping 17 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b
    testing.tRunner(0xc000cc8000, 0x729a2d8)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0905 16:13:56.852766      18 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep  5 16:13:56.852: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-2971/dns-test-064d8b7d-e23f-4399-87bc-6234c3877b93: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-2971/pods/dns-test-064d8b7d-e23f-4399-87bc-6234c3877b93/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:217, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc000124010, 0x7f4f92dca5b8, 0x18, 0xc00169ffc8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc000124010, 0xc001e4f600, 0x2a14500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x78de4a8, 0xc000124010, 0xc00169ff01, 0xc00169ffc8, 0xc001e4f600, 0x6826620, 0xc001e4f600)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:577 +0xe5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x78de4a8, 0xc000124010, 0x12a05f200, 0x8bb2c97000, 0xc001e4f600, 0x6d6e4e0, 0x2521201)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc000aee8c0, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc00351ab80, 0x8, 0x8, 0x702fe9b, 0x7, 0xc00186b000, 0x7971668, 0xc001558b00, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x13c\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000f706e0, 0xc00186b000, 0xc00351ab80, 0x8, 0x8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.1()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:64 +0x58a\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc000cc8000)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc000cc8000)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b\ntesting.tRunner(0xc000cc8000, 0x729a2d8)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.

    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6bbe4c0, 0xc0042061c0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6bbe4c0, 0xc0042061c0)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc000568580, 0x159, 0x88abe86, 0x7d, 0xd9, 0xc00020f400, 0xa87)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x62ef260, 0x77956f0)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc000568580, 0x159, 0xc0032f96d8, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc000568580, 0x159, 0xc0032f97c0, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x70d3e4f, 0x24, 0xc0032f9a20, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc000124010, 0x7f4f92dca5b8, 0x18, 0xc00169ffc8)
... skipping 68 lines ...
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-9ce3c7ee-7794-4594-afce-47046604ae78
    STEP: Creating a pod to test consume configMaps
    Sep  5 16:13:55.503: INFO: Waiting up to 5m0s for pod "pod-configmaps-9eb8cbb6-c1fb-4ec5-a46f-c944a077763a" in namespace "configmap-2319" to be "Succeeded or Failed"
    Sep  5 16:13:55.507: INFO: Pod "pod-configmaps-9eb8cbb6-c1fb-4ec5-a46f-c944a077763a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.265173ms
    Sep  5 16:13:57.512: INFO: Pod "pod-configmaps-9eb8cbb6-c1fb-4ec5-a46f-c944a077763a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008828438s
    Sep  5 16:13:59.518: INFO: Pod "pod-configmaps-9eb8cbb6-c1fb-4ec5-a46f-c944a077763a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014139444s
    STEP: Saw pod success
    Sep  5 16:13:59.518: INFO: Pod "pod-configmaps-9eb8cbb6-c1fb-4ec5-a46f-c944a077763a" satisfied condition "Succeeded or Failed"
    Sep  5 16:13:59.522: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-configmaps-9eb8cbb6-c1fb-4ec5-a46f-c944a077763a container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 16:13:59.552: INFO: Waiting for pod pod-configmaps-9eb8cbb6-c1fb-4ec5-a46f-c944a077763a to disappear
    Sep  5 16:13:59.555: INFO: Pod pod-configmaps-9eb8cbb6-c1fb-4ec5-a46f-c944a077763a no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:13:59.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-2319" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":755,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 16:13:59.634: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e5038bb7-6222-4828-990c-1b366f0c4a33" in namespace "projected-1647" to be "Succeeded or Failed"
    Sep  5 16:13:59.638: INFO: Pod "downwardapi-volume-e5038bb7-6222-4828-990c-1b366f0c4a33": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100891ms
    Sep  5 16:14:01.643: INFO: Pod "downwardapi-volume-e5038bb7-6222-4828-990c-1b366f0c4a33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009232233s
    Sep  5 16:14:03.653: INFO: Pod "downwardapi-volume-e5038bb7-6222-4828-990c-1b366f0c4a33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019739279s
    STEP: Saw pod success
    Sep  5 16:14:03.653: INFO: Pod "downwardapi-volume-e5038bb7-6222-4828-990c-1b366f0c4a33" satisfied condition "Succeeded or Failed"
    Sep  5 16:14:03.658: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod downwardapi-volume-e5038bb7-6222-4828-990c-1b366f0c4a33 container client-container: <nil>
    STEP: delete the pod
    Sep  5 16:14:03.677: INFO: Waiting for pod downwardapi-volume-e5038bb7-6222-4828-990c-1b366f0c4a33 to disappear
    Sep  5 16:14:03.680: INFO: Pod downwardapi-volume-e5038bb7-6222-4828-990c-1b366f0c4a33 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:14:03.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-1647" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":768,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:14:03.703: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a container's command [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in container's command
    Sep  5 16:14:03.743: INFO: Waiting up to 5m0s for pod "var-expansion-14ab6f33-8341-4633-b5c8-e651bc99bda2" in namespace "var-expansion-6675" to be "Succeeded or Failed"
    Sep  5 16:14:03.747: INFO: Pod "var-expansion-14ab6f33-8341-4633-b5c8-e651bc99bda2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.451491ms
    Sep  5 16:14:05.751: INFO: Pod "var-expansion-14ab6f33-8341-4633-b5c8-e651bc99bda2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007692068s
    Sep  5 16:14:07.756: INFO: Pod "var-expansion-14ab6f33-8341-4633-b5c8-e651bc99bda2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012645068s
    STEP: Saw pod success
    Sep  5 16:14:07.756: INFO: Pod "var-expansion-14ab6f33-8341-4633-b5c8-e651bc99bda2" satisfied condition "Succeeded or Failed"
    Sep  5 16:14:07.759: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod var-expansion-14ab6f33-8341-4633-b5c8-e651bc99bda2 container dapi-container: <nil>
    STEP: delete the pod
    Sep  5 16:14:07.775: INFO: Waiting for pod var-expansion-14ab6f33-8341-4633-b5c8-e651bc99bda2 to disappear
    Sep  5 16:14:07.778: INFO: Pod var-expansion-14ab6f33-8341-4633-b5c8-e651bc99bda2 no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:14:07.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-6675" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":773,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:14:07.791: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-d51e2379-7df2-4445-abf8-909de5a463e7
    STEP: Creating a pod to test consume secrets
    Sep  5 16:14:07.837: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fd30ee9e-b912-4092-a561-2d9700c73999" in namespace "projected-7662" to be "Succeeded or Failed"
    Sep  5 16:14:07.840: INFO: Pod "pod-projected-secrets-fd30ee9e-b912-4092-a561-2d9700c73999": Phase="Pending", Reason="", readiness=false. Elapsed: 2.76518ms
    Sep  5 16:14:09.844: INFO: Pod "pod-projected-secrets-fd30ee9e-b912-4092-a561-2d9700c73999": Phase="Running", Reason="", readiness=false. Elapsed: 2.006909267s
    Sep  5 16:14:11.850: INFO: Pod "pod-projected-secrets-fd30ee9e-b912-4092-a561-2d9700c73999": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012175172s
    STEP: Saw pod success
    Sep  5 16:14:11.850: INFO: Pod "pod-projected-secrets-fd30ee9e-b912-4092-a561-2d9700c73999" satisfied condition "Succeeded or Failed"
    Sep  5 16:14:11.853: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-projected-secrets-fd30ee9e-b912-4092-a561-2d9700c73999 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  5 16:14:11.870: INFO: Waiting for pod pod-projected-secrets-fd30ee9e-b912-4092-a561-2d9700c73999 to disappear
    Sep  5 16:14:11.874: INFO: Pod pod-projected-secrets-fd30ee9e-b912-4092-a561-2d9700c73999 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:14:11.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7662" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":776,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:14:11.890: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename kubectl
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 119 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:14:27.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-3580" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":42,"skipped":776,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:14:27.293: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir volume type on tmpfs
    Sep  5 16:14:27.332: INFO: Waiting up to 5m0s for pod "pod-828b5cf7-edcd-4c8e-8107-0c3c81d71acd" in namespace "emptydir-3949" to be "Succeeded or Failed"
    Sep  5 16:14:27.335: INFO: Pod "pod-828b5cf7-edcd-4c8e-8107-0c3c81d71acd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.984022ms
    Sep  5 16:14:29.340: INFO: Pod "pod-828b5cf7-edcd-4c8e-8107-0c3c81d71acd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007270662s
    Sep  5 16:14:31.344: INFO: Pod "pod-828b5cf7-edcd-4c8e-8107-0c3c81d71acd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011301245s
    STEP: Saw pod success
    Sep  5 16:14:31.344: INFO: Pod "pod-828b5cf7-edcd-4c8e-8107-0c3c81d71acd" satisfied condition "Succeeded or Failed"
    Sep  5 16:14:31.347: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-grv26 pod pod-828b5cf7-edcd-4c8e-8107-0c3c81d71acd container test-container: <nil>
    STEP: delete the pod
    Sep  5 16:14:31.370: INFO: Waiting for pod pod-828b5cf7-edcd-4c8e-8107-0c3c81d71acd to disappear
    Sep  5 16:14:31.373: INFO: Pod pod-828b5cf7-edcd-4c8e-8107-0c3c81d71acd no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:14:31.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-3949" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":791,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:14:35.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-8782" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":813,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 42 lines ...
    STEP: Destroying namespace "services-1745" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":45,"skipped":820,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:14:52.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-3940" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":46,"skipped":832,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:14:52.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-9383" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":47,"skipped":866,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:14:52.592: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename kubectl
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 45 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:14:55.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-721" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":48,"skipped":866,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:14:55.533: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-map-e9e7f1f3-e6b8-42fa-9227-470954a014a7
    STEP: Creating a pod to test consume secrets
    Sep  5 16:14:55.596: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4b6843e5-ddf4-454d-b4a6-29acd6300915" in namespace "projected-5901" to be "Succeeded or Failed"
    Sep  5 16:14:55.601: INFO: Pod "pod-projected-secrets-4b6843e5-ddf4-454d-b4a6-29acd6300915": Phase="Pending", Reason="", readiness=false. Elapsed: 5.064661ms
    Sep  5 16:14:57.607: INFO: Pod "pod-projected-secrets-4b6843e5-ddf4-454d-b4a6-29acd6300915": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011304059s
    Sep  5 16:14:59.613: INFO: Pod "pod-projected-secrets-4b6843e5-ddf4-454d-b4a6-29acd6300915": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016749576s
    STEP: Saw pod success
    Sep  5 16:14:59.613: INFO: Pod "pod-projected-secrets-4b6843e5-ddf4-454d-b4a6-29acd6300915" satisfied condition "Succeeded or Failed"
    Sep  5 16:14:59.619: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-projected-secrets-4b6843e5-ddf4-454d-b4a6-29acd6300915 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  5 16:14:59.642: INFO: Waiting for pod pod-projected-secrets-4b6843e5-ddf4-454d-b4a6-29acd6300915 to disappear
    Sep  5 16:14:59.645: INFO: Pod pod-projected-secrets-4b6843e5-ddf4-454d-b4a6-29acd6300915 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:14:59.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5901" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":867,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:15:06.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-2617" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":873,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    STEP: Looking for a node to schedule stateful set and pod
    STEP: Creating pod with conflicting port in namespace statefulset-7664
    STEP: Waiting until pod test-pod will start running in namespace statefulset-7664
    STEP: Creating statefulset with conflicting port in namespace statefulset-7664
    STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7664
    Sep  5 16:15:08.415: INFO: Observed stateful pod in namespace: statefulset-7664, name: ss-0, uid: 5a5a0770-7072-46cd-bc77-eea43f2000b1, status phase: Pending. Waiting for statefulset controller to delete.
    Sep  5 16:15:08.430: INFO: Observed stateful pod in namespace: statefulset-7664, name: ss-0, uid: 5a5a0770-7072-46cd-bc77-eea43f2000b1, status phase: Failed. Waiting for statefulset controller to delete.
    Sep  5 16:15:08.458: INFO: Observed stateful pod in namespace: statefulset-7664, name: ss-0, uid: 5a5a0770-7072-46cd-bc77-eea43f2000b1, status phase: Failed. Waiting for statefulset controller to delete.
    Sep  5 16:15:08.464: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7664
    STEP: Removing pod with conflicting port in namespace statefulset-7664
    STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7664 and will be in running state
    [AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118
    Sep  5 16:15:10.502: INFO: Deleting all statefulset in ns statefulset-7664
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:15:20.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-7664" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":51,"skipped":903,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:15:20.601: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on tmpfs
    Sep  5 16:15:20.638: INFO: Waiting up to 5m0s for pod "pod-aa1a3673-dc00-4b55-ab18-c5e5c138426e" in namespace "emptydir-4895" to be "Succeeded or Failed"
    Sep  5 16:15:20.641: INFO: Pod "pod-aa1a3673-dc00-4b55-ab18-c5e5c138426e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.949207ms
    Sep  5 16:15:22.645: INFO: Pod "pod-aa1a3673-dc00-4b55-ab18-c5e5c138426e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00696093s
    Sep  5 16:15:24.651: INFO: Pod "pod-aa1a3673-dc00-4b55-ab18-c5e5c138426e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013207431s
    STEP: Saw pod success
    Sep  5 16:15:24.651: INFO: Pod "pod-aa1a3673-dc00-4b55-ab18-c5e5c138426e" satisfied condition "Succeeded or Failed"
    Sep  5 16:15:24.654: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-aa1a3673-dc00-4b55-ab18-c5e5c138426e container test-container: <nil>
    STEP: delete the pod
    Sep  5 16:15:24.668: INFO: Waiting for pod pod-aa1a3673-dc00-4b55-ab18-c5e5c138426e to disappear
    Sep  5 16:15:24.671: INFO: Pod pod-aa1a3673-dc00-4b55-ab18-c5e5c138426e no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:15:24.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-4895" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":52,"skipped":929,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:15:24.704: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep  5 16:15:24.747: INFO: Waiting up to 5m0s for pod "downward-api-6cb34dbd-a006-47a0-81f2-b1d21eba028d" in namespace "downward-api-894" to be "Succeeded or Failed"
    Sep  5 16:15:24.751: INFO: Pod "downward-api-6cb34dbd-a006-47a0-81f2-b1d21eba028d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.56555ms
    Sep  5 16:15:26.756: INFO: Pod "downward-api-6cb34dbd-a006-47a0-81f2-b1d21eba028d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007983538s
    Sep  5 16:15:28.759: INFO: Pod "downward-api-6cb34dbd-a006-47a0-81f2-b1d21eba028d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011652713s
    STEP: Saw pod success
    Sep  5 16:15:28.759: INFO: Pod "downward-api-6cb34dbd-a006-47a0-81f2-b1d21eba028d" satisfied condition "Succeeded or Failed"
    Sep  5 16:15:28.762: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod downward-api-6cb34dbd-a006-47a0-81f2-b1d21eba028d container dapi-container: <nil>
    STEP: delete the pod
    Sep  5 16:15:28.779: INFO: Waiting for pod downward-api-6cb34dbd-a006-47a0-81f2-b1d21eba028d to disappear
    Sep  5 16:15:28.782: INFO: Pod downward-api-6cb34dbd-a006-47a0-81f2-b1d21eba028d no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:15:28.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-894" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":943,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:15:28.856: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  5 16:15:30.907: INFO: Deleting pod "var-expansion-9e4f44bb-386a-411d-8737-e31f05b1c1b0" in namespace "var-expansion-3532"
    Sep  5 16:15:30.912: INFO: Wait up to 5m0s for pod "var-expansion-9e4f44bb-386a-411d-8737-e31f05b1c1b0" to be fully deleted
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:15:32.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-3532" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":54,"skipped":983,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:15:32.941: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-32ff14f9-ef50-4220-9c8d-c6bb0cb5e14f
    STEP: Creating a pod to test consume configMaps
    Sep  5 16:15:32.984: INFO: Waiting up to 5m0s for pod "pod-configmaps-a95a459c-0c36-4e7e-81a6-a0ec75a94c76" in namespace "configmap-5464" to be "Succeeded or Failed"
    Sep  5 16:15:32.990: INFO: Pod "pod-configmaps-a95a459c-0c36-4e7e-81a6-a0ec75a94c76": Phase="Pending", Reason="", readiness=false. Elapsed: 5.325229ms
    Sep  5 16:15:34.995: INFO: Pod "pod-configmaps-a95a459c-0c36-4e7e-81a6-a0ec75a94c76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010679422s
    Sep  5 16:15:36.999: INFO: Pod "pod-configmaps-a95a459c-0c36-4e7e-81a6-a0ec75a94c76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014605231s
    STEP: Saw pod success
    Sep  5 16:15:36.999: INFO: Pod "pod-configmaps-a95a459c-0c36-4e7e-81a6-a0ec75a94c76" satisfied condition "Succeeded or Failed"
    Sep  5 16:15:37.002: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-configmaps-a95a459c-0c36-4e7e-81a6-a0ec75a94c76 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 16:15:37.016: INFO: Waiting for pod pod-configmaps-a95a459c-0c36-4e7e-81a6-a0ec75a94c76 to disappear
    Sep  5 16:15:37.020: INFO: Pod pod-configmaps-a95a459c-0c36-4e7e-81a6-a0ec75a94c76 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:15:37.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-5464" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":55,"skipped":990,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    S
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:15:38.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-9499" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":56,"skipped":991,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 16:15:38.265: INFO: Waiting up to 5m0s for pod "downwardapi-volume-68b266a5-f2b8-4e96-84f7-6ebba4eaeabf" in namespace "projected-7159" to be "Succeeded or Failed"
    Sep  5 16:15:38.269: INFO: Pod "downwardapi-volume-68b266a5-f2b8-4e96-84f7-6ebba4eaeabf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.739587ms
    Sep  5 16:15:40.274: INFO: Pod "downwardapi-volume-68b266a5-f2b8-4e96-84f7-6ebba4eaeabf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008926028s
    Sep  5 16:15:42.281: INFO: Pod "downwardapi-volume-68b266a5-f2b8-4e96-84f7-6ebba4eaeabf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015786205s
    STEP: Saw pod success
    Sep  5 16:15:42.281: INFO: Pod "downwardapi-volume-68b266a5-f2b8-4e96-84f7-6ebba4eaeabf" satisfied condition "Succeeded or Failed"
    Sep  5 16:15:42.285: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod downwardapi-volume-68b266a5-f2b8-4e96-84f7-6ebba4eaeabf container client-container: <nil>
    STEP: delete the pod
    Sep  5 16:15:42.312: INFO: Waiting for pod downwardapi-volume-68b266a5-f2b8-4e96-84f7-6ebba4eaeabf to disappear
    Sep  5 16:15:42.315: INFO: Pod downwardapi-volume-68b266a5-f2b8-4e96-84f7-6ebba4eaeabf no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:15:42.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7159" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":57,"skipped":1065,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:15:42.345: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-a3605344-a299-459a-b5aa-968ec9bf0ad6
    STEP: Creating a pod to test consume configMaps
    Sep  5 16:15:42.396: INFO: Waiting up to 5m0s for pod "pod-configmaps-466e72e8-2d38-41d6-846a-92b754850a15" in namespace "configmap-1889" to be "Succeeded or Failed"
    Sep  5 16:15:42.401: INFO: Pod "pod-configmaps-466e72e8-2d38-41d6-846a-92b754850a15": Phase="Pending", Reason="", readiness=false. Elapsed: 4.295544ms
    Sep  5 16:15:44.406: INFO: Pod "pod-configmaps-466e72e8-2d38-41d6-846a-92b754850a15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009787309s
    Sep  5 16:15:46.412: INFO: Pod "pod-configmaps-466e72e8-2d38-41d6-846a-92b754850a15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015420198s
    STEP: Saw pod success
    Sep  5 16:15:46.412: INFO: Pod "pod-configmaps-466e72e8-2d38-41d6-846a-92b754850a15" satisfied condition "Succeeded or Failed"
    Sep  5 16:15:46.415: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-configmaps-466e72e8-2d38-41d6-846a-92b754850a15 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 16:15:46.435: INFO: Waiting for pod pod-configmaps-466e72e8-2d38-41d6-846a-92b754850a15 to disappear
    Sep  5 16:15:46.439: INFO: Pod pod-configmaps-466e72e8-2d38-41d6-846a-92b754850a15 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:15:46.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-1889" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":58,"skipped":1076,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 41 lines ...
    STEP: Destroying namespace "services-7494" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":59,"skipped":1086,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
    [It] should serve a basic endpoint from pods  [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating service endpoint-test2 in namespace services-4388
    STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4388 to expose endpoints map[]
    Sep  5 16:15:58.086: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found
    Sep  5 16:15:59.095: INFO: successfully validated that service endpoint-test2 in namespace services-4388 exposes endpoints map[]
    STEP: Creating pod pod1 in namespace services-4388
    Sep  5 16:15:59.105: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
    Sep  5 16:16:01.109: INFO: The status of Pod pod1 is Running (Ready = true)
    STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4388 to expose endpoints map[pod1:[80]]
    Sep  5 16:16:01.122: INFO: successfully validated that service endpoint-test2 in namespace services-4388 exposes endpoints map[pod1:[80]]
... skipping 36 lines ...
    STEP: Destroying namespace "services-4388" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":60,"skipped":1090,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
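The endpoint test above (service endpoint-test2 in namespace services-4388) creates the Service first, tolerates the initial "not found" read, and then polls until the Endpoints object lists each backing pod and its port. Below is a minimal client-go sketch of that wait, not the e2e framework's own helper; the namespace, Service name and kubeconfig path are the ones visible in the log, everything else (interval, error handling) is illustrative.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll for up to 3m (interval chosen arbitrarily), mirroring the
	// "waiting up to 3m0s ... to expose endpoints" step in the log.
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		ep, err := cs.CoreV1().Endpoints("services-4388").Get(context.TODO(), "endpoint-test2", metav1.GetOptions{})
		if err != nil {
			// An "endpoints not found" read right after Service creation is expected; keep polling.
			return false, nil
		}
		for _, subset := range ep.Subsets {
			if len(subset.Addresses) > 0 {
				fmt.Printf("endpoints ready: %d address(es)\n", len(subset.Addresses))
				return true, nil
			}
		}
		return false, nil
	})
	if err != nil {
		panic(err)
	}
}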
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:16:15.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-9195" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":61,"skipped":1116,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
    Sep  5 16:16:17.716: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:17.720: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:17.731: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:17.735: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:17.739: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:17.743: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:17.750: INFO: Lookups using dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1022.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1022.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local jessie_udp@dns-test-service-2.dns-1022.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1022.svc.cluster.local]

    
    Sep  5 16:16:22.756: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:22.761: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:22.765: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:22.770: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:22.783: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:22.786: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:22.790: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:22.794: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:22.802: INFO: Lookups using dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1022.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1022.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local jessie_udp@dns-test-service-2.dns-1022.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1022.svc.cluster.local]

    
    Sep  5 16:16:27.756: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:27.759: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:27.766: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:27.770: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:27.780: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:27.783: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:27.786: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:27.789: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:27.796: INFO: Lookups using dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1022.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1022.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local jessie_udp@dns-test-service-2.dns-1022.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1022.svc.cluster.local]

    
    Sep  5 16:16:32.755: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:32.758: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:32.762: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:32.765: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:32.775: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:32.779: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:32.782: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:32.787: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:32.794: INFO: Lookups using dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1022.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1022.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local jessie_udp@dns-test-service-2.dns-1022.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1022.svc.cluster.local]

    
    Sep  5 16:16:37.755: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:37.759: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:37.762: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:37.766: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:37.775: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:37.778: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:37.781: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:37.785: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:37.793: INFO: Lookups using dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1022.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1022.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local jessie_udp@dns-test-service-2.dns-1022.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1022.svc.cluster.local]

    
    Sep  5 16:16:42.755: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:42.759: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:42.763: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:42.767: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:42.778: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:42.782: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:42.786: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:42.790: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1022.svc.cluster.local from pod dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6: the server could not find the requested resource (get pods dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6)
    Sep  5 16:16:42.798: INFO: Lookups using dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1022.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1022.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1022.svc.cluster.local jessie_udp@dns-test-service-2.dns-1022.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1022.svc.cluster.local]

    
    Sep  5 16:16:47.790: INFO: DNS probes using dns-1022/dns-test-caa01b3e-69e3-49cf-a2e3-ed20de711ae6 succeeded
    
    STEP: deleting the pod
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:16:47.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-1022" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":62,"skipped":1121,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSS
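The DNS probes above run inside a test pod and repeatedly resolve the headless-service names over UDP and TCP until every lookup succeeds, so the early "Unable to read ..." entries are expected while the records propagate. A much simplified sketch of a retrying lookup in Go follows; it only works from inside a cluster pod, the hostname is the one from this run, and the retry parameters are illustrative rather than the real dig/nslookup probe set.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// lookupWithRetry resolves a cluster DNS name, retrying on transient failures,
// roughly like the probe results the test keeps re-reading above.
func lookupWithRetry(name string, attempts int) error {
	for i := 0; i < attempts; i++ {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		addrs, err := net.DefaultResolver.LookupHost(ctx, name)
		cancel()
		if err == nil && len(addrs) > 0 {
			fmt.Printf("%s -> %v\n", name, addrs)
			return nil
		}
		// Tolerate transient failures and retry every 5s.
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("lookup of %s never succeeded", name)
}

func main() {
	// Name pattern taken from the log; the namespace suffix is specific to that run.
	if err := lookupWithRetry("dns-test-service-2.dns-1022.svc.cluster.local", 10); err != nil {
		panic(err)
	}
}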
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:17:01.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-3983" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":63,"skipped":1130,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 34 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:17:21.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-5457" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":-1,"completed":64,"skipped":1158,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 65 lines ...
    STEP: Destroying namespace "services-279" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":65,"skipped":1159,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
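The session-affinity case provisions a NodePort Service with ClientIP affinity and a short affinity timeout, then checks that requests stick to a single backend until that timeout lapses. A sketch of the Service shape using the client-go API types is below; the name, selector, ports and the 10-second timeout are illustrative values, not read from this run.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	timeout := int32(10) // illustrative; a short timeout lets affinity expiry be observed quickly
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport-timeout"},
		Spec: corev1.ServiceSpec{
			Type:            corev1.ServiceTypeNodePort,
			Selector:        map[string]string{"app": "affinity-nodeport-timeout"},
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
			},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376),
			}},
		},
	}
	out, _ := yaml.Marshal(svc)
	fmt.Println(string(out))
}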
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-6495-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":66,"skipped":1189,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:18:28.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-9166" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":67,"skipped":1192,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 37 lines ...
    Sep  5 16:18:33.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2049 explain e2e-test-crd-publish-openapi-4697-crds.spec'
    Sep  5 16:18:33.711: INFO: stderr: ""
    Sep  5 16:18:33.711: INFO: stdout: "KIND:     e2e-test-crd-publish-openapi-4697-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
    Sep  5 16:18:33.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2049 explain e2e-test-crd-publish-openapi-4697-crds.spec.bars'
    Sep  5 16:18:33.917: INFO: stderr: ""
    Sep  5 16:18:33.917: INFO: stdout: "KIND:     e2e-test-crd-publish-openapi-4697-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
    STEP: kubectl explain works to return error when explain is called on property that doesn't exist
    Sep  5 16:18:33.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2049 explain e2e-test-crd-publish-openapi-4697-crds.spec.bars2'
    Sep  5 16:18:34.157: INFO: rc: 1
    [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:18:36.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-2049" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":68,"skipped":1213,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
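The CustomResourcePublishOpenAPI test drives kubectl explain against the schema published for the CRD and also checks the error path: explaining a property that does not exist makes kubectl exit non-zero, which is the "rc: 1" above. A sketch of those two invocations via os/exec, roughly what the framework's kubectl runner amounts to; the binary path, kubeconfig, namespace and resource names are copied from the log.

package main

import (
	"fmt"
	"os/exec"
)

// explain runs `kubectl explain <field>` with the same flags shown in the log.
func explain(field string) (string, error) {
	cmd := exec.Command("/usr/local/bin/kubectl",
		"--kubeconfig=/tmp/kubeconfig",
		"--namespace=crd-publish-openapi-2049",
		"explain", field)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	// Existing property: kubectl prints the published schema description for spec.bars.
	if out, err := explain("e2e-test-crd-publish-openapi-4697-crds.spec.bars"); err == nil {
		fmt.Print(out)
	}
	// Missing property: kubectl exits non-zero (the "rc: 1" seen in the log).
	if _, err := explain("e2e-test-crd-publish-openapi-4697-crds.spec.bars2"); err != nil {
		fmt.Println("explain failed as expected:", err)
	}
}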
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:18:36.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-2157" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":69,"skipped":1257,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
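The immutable-ConfigMap case relies on the ConfigMap `immutable` field: once it is set to true, the API server rejects updates to the data, and the only way to change the contents is to delete and recreate the object. A minimal client-go sketch of creating such a ConfigMap follows; the namespace and name are illustrative, only the kubeconfig path matches the run.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	immutable := true
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "immutable-cm"},
		Data:       map[string]string{"key": "value"},
		Immutable:  &immutable, // once true, the API server rejects edits to Data/BinaryData
	}
	if _, err := cs.CoreV1().ConfigMaps("default").Create(context.TODO(), cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}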
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:18:36.711: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename init-container
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
    [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod
    Sep  5 16:18:36.747: INFO: PodSpec: initContainers in spec.initContainers
    [AfterEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:18:41.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-7962" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":70,"skipped":1291,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
    [It] should serve multiport endpoints from pods  [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating service multi-endpoint-test in namespace services-522
    STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-522 to expose endpoints map[]
    Sep  5 16:18:41.917: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found
    Sep  5 16:18:42.927: INFO: successfully validated that service multi-endpoint-test in namespace services-522 exposes endpoints map[]
    STEP: Creating pod pod1 in namespace services-522
    Sep  5 16:18:42.940: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
    Sep  5 16:18:44.945: INFO: The status of Pod pod1 is Running (Ready = true)
    STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-522 to expose endpoints map[pod1:[100]]
    Sep  5 16:18:44.960: INFO: successfully validated that service multi-endpoint-test in namespace services-522 exposes endpoints map[pod1:[100]]
... skipping 28 lines ...
    STEP: Destroying namespace "services-522" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":71,"skipped":1324,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSS
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":68,"skipped":1095,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:13:56.883: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
    
    STEP: creating a pod to probe DNS
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  5 16:17:32.081: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-2924/dns-test-e463bdf2-2416-47ea-995d-de53ea0f7f85: the server is currently unable to handle the request (get pods dns-test-e463bdf2-2416-47ea-995d-de53ea0f7f85)
    Sep  5 16:18:58.961: FAIL: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-2924/dns-test-e463bdf2-2416-47ea-995d-de53ea0f7f85: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-2924/pods/dns-test-e463bdf2-2416-47ea-995d-de53ea0f7f85/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local": context deadline exceeded
    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc000124010, 0x7f4f92dcaf18, 0x18, 0xc0039266d8)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc000124010, 0xc002305230, 0x2a14500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f
... skipping 17 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b
    testing.tRunner(0xc000cc8000, 0x729a2d8)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
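The stack trace shows where the failing DNS test spends its time: assertFilesContain fetches each probe result through the apiserver pod proxy from inside wait.PollImmediate, whose arguments 0x12a05f200 and 0x8bb2c97000 decode to a 5s interval and a 10m timeout. In this run the proxy request itself exceeded its context deadline, so the condition function called Failf and the failure surfaced as a Ginkgo panic. The polling pattern in isolation, as a sketch with an illustrative condition body rather than the real e2e helper:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempts := 0
	// Poll every 5s for up to 10m, the same shape as the helper in the trace.
	err := wait.PollImmediate(5*time.Second, 10*time.Minute, func() (bool, error) {
		attempts++
		// Illustrative condition; the real helper reads each probe result file
		// via the apiserver pod proxy and requires every file to be non-empty.
		ok := attempts >= 3
		if !ok {
			fmt.Println("condition not met yet, retrying")
		}
		return ok, nil // returning a non-nil error here would abort the poll early
	})
	if err != nil {
		fmt.Println("timed out waiting for condition:", err)
	}
}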
    E0905 16:18:58.962145      18 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep  5 16:18:58.961: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-2924/dns-test-e463bdf2-2416-47ea-995d-de53ea0f7f85: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-2924/pods/dns-test-e463bdf2-2416-47ea-995d-de53ea0f7f85/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:217, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc000124010, 0x7f4f92dcaf18, 0x18, 0xc0039266d8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc000124010, 0xc002305230, 0x2a14500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x78de4a8, 0xc000124010, 0xc003926601, 0xc0039266d8, 0xc002305230, 0x6826620, 0xc002305230)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:577 +0xe5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x78de4a8, 0xc000124010, 0x12a05f200, 0x8bb2c97000, 0xc002305230, 0x6d6e4e0, 0x2521201)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc000aef5e0, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc002ed6580, 0x8, 0x8, 0x702fe9b, 0x7, 0xc004482c00, 0x7971668, 0xc004060840, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x13c\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000f706e0, 0xc004482c00, 0xc002ed6580, 0x8, 0x8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.1()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:64 +0x58a\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc000cc8000)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc000cc8000)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b\ntesting.tRunner(0xc000cc8000, 0x729a2d8)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.

    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6bbe4c0, 0xc0047bcc00)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6bbe4c0, 0xc0047bcc00)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc000b3bce0, 0x159, 0x88abe86, 0x7d, 0xd9, 0xc00020f400, 0xa87)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x62ef260, 0x77956f0)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc000b3bce0, 0x159, 0xc0017df6d8, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc000b3bce0, 0x159, 0xc0017df7c0, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x70d3e4f, 0x24, 0xc0017dfa20, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc000124010, 0x7f4f92dcaf18, 0x18, 0xc0039266d8)
... skipping 89 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:18:59.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-2719" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":72,"skipped":1327,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:18:59.311: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on node default medium
    Sep  5 16:18:59.357: INFO: Waiting up to 5m0s for pod "pod-e6577007-3f59-481d-9dd6-1aae3aa088c2" in namespace "emptydir-8875" to be "Succeeded or Failed"
    Sep  5 16:18:59.365: INFO: Pod "pod-e6577007-3f59-481d-9dd6-1aae3aa088c2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046438ms
    Sep  5 16:19:01.370: INFO: Pod "pod-e6577007-3f59-481d-9dd6-1aae3aa088c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012892726s
    Sep  5 16:19:03.375: INFO: Pod "pod-e6577007-3f59-481d-9dd6-1aae3aa088c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018132708s
    STEP: Saw pod success
    Sep  5 16:19:03.375: INFO: Pod "pod-e6577007-3f59-481d-9dd6-1aae3aa088c2" satisfied condition "Succeeded or Failed"
    Sep  5 16:19:03.379: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-e6577007-3f59-481d-9dd6-1aae3aa088c2 container test-container: <nil>
    STEP: delete the pod
    Sep  5 16:19:03.396: INFO: Waiting for pod pod-e6577007-3f59-481d-9dd6-1aae3aa088c2 to disappear
    Sep  5 16:19:03.399: INFO: Pod pod-e6577007-3f59-481d-9dd6-1aae3aa088c2 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:19:03.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-8875" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":73,"skipped":1358,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SS
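Each EmptyDir case launches a one-shot pod that mounts an emptyDir volume with the requested medium and file mode, exercises it from the test container, and is expected to reach Succeeded, the "Saw pod success" line above. A sketch of the pod shape using the client-go types; the image, command and mount path are illustrative stand-ins for what the conformance container actually does.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // the pod should end up Succeeded, as in the log
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Default medium is node-local storage; other cases use Medium: "Memory".
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // illustrative; the conformance test uses agnhost
				Command: []string{"sh", "-c", "ls -l /mnt/volume && echo done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/mnt/volume",
				}},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}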
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:19:19.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-7800" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":74,"skipped":1360,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:19:19.152: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on node default medium
    Sep  5 16:19:19.207: INFO: Waiting up to 5m0s for pod "pod-ed54936a-2edc-45e8-89cb-c58cab1f2f1d" in namespace "emptydir-3284" to be "Succeeded or Failed"
    Sep  5 16:19:19.212: INFO: Pod "pod-ed54936a-2edc-45e8-89cb-c58cab1f2f1d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.74465ms
    Sep  5 16:19:21.218: INFO: Pod "pod-ed54936a-2edc-45e8-89cb-c58cab1f2f1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010078167s
    Sep  5 16:19:23.222: INFO: Pod "pod-ed54936a-2edc-45e8-89cb-c58cab1f2f1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014928857s
    STEP: Saw pod success
    Sep  5 16:19:23.223: INFO: Pod "pod-ed54936a-2edc-45e8-89cb-c58cab1f2f1d" satisfied condition "Succeeded or Failed"
    Sep  5 16:19:23.226: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-ed54936a-2edc-45e8-89cb-c58cab1f2f1d container test-container: <nil>
    STEP: delete the pod
    Sep  5 16:19:23.243: INFO: Waiting for pod pod-ed54936a-2edc-45e8-89cb-c58cab1f2f1d to disappear
    Sep  5 16:19:23.245: INFO: Pod pod-ed54936a-2edc-45e8-89cb-c58cab1f2f1d no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:19:23.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-3284" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":75,"skipped":1370,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:19:23.283: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename svcaccounts
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should mount projected service account token [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test service account token: 
    Sep  5 16:19:23.321: INFO: Waiting up to 5m0s for pod "test-pod-2c240fbe-712b-4188-815c-573d1ab03856" in namespace "svcaccounts-4550" to be "Succeeded or Failed"
    Sep  5 16:19:23.325: INFO: Pod "test-pod-2c240fbe-712b-4188-815c-573d1ab03856": Phase="Pending", Reason="", readiness=false. Elapsed: 3.872629ms
    Sep  5 16:19:25.330: INFO: Pod "test-pod-2c240fbe-712b-4188-815c-573d1ab03856": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008708891s
    Sep  5 16:19:27.334: INFO: Pod "test-pod-2c240fbe-712b-4188-815c-573d1ab03856": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012838934s
    STEP: Saw pod success
    Sep  5 16:19:27.334: INFO: Pod "test-pod-2c240fbe-712b-4188-815c-573d1ab03856" satisfied condition "Succeeded or Failed"
    Sep  5 16:19:27.337: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod test-pod-2c240fbe-712b-4188-815c-573d1ab03856 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 16:19:27.355: INFO: Waiting for pod test-pod-2c240fbe-712b-4188-815c-573d1ab03856 to disappear
    Sep  5 16:19:27.359: INFO: Pod test-pod-2c240fbe-712b-4188-815c-573d1ab03856 no longer exists
    [AfterEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:19:27.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-4550" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":76,"skipped":1391,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:19:33.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-4647" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":77,"skipped":1416,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with projected pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-projected-nv54
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  5 16:19:33.545: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-nv54" in namespace "subpath-4721" to be "Succeeded or Failed"
    Sep  5 16:19:33.549: INFO: Pod "pod-subpath-test-projected-nv54": Phase="Pending", Reason="", readiness=false. Elapsed: 3.638213ms
    Sep  5 16:19:35.553: INFO: Pod "pod-subpath-test-projected-nv54": Phase="Running", Reason="", readiness=true. Elapsed: 2.008073619s
    Sep  5 16:19:37.559: INFO: Pod "pod-subpath-test-projected-nv54": Phase="Running", Reason="", readiness=true. Elapsed: 4.013306844s
    Sep  5 16:19:39.563: INFO: Pod "pod-subpath-test-projected-nv54": Phase="Running", Reason="", readiness=true. Elapsed: 6.017985705s
    Sep  5 16:19:41.573: INFO: Pod "pod-subpath-test-projected-nv54": Phase="Running", Reason="", readiness=true. Elapsed: 8.02817375s
    Sep  5 16:19:43.579: INFO: Pod "pod-subpath-test-projected-nv54": Phase="Running", Reason="", readiness=true. Elapsed: 10.033639517s
... skipping 2 lines ...
    Sep  5 16:19:49.594: INFO: Pod "pod-subpath-test-projected-nv54": Phase="Running", Reason="", readiness=true. Elapsed: 16.049189045s
    Sep  5 16:19:51.599: INFO: Pod "pod-subpath-test-projected-nv54": Phase="Running", Reason="", readiness=true. Elapsed: 18.053986471s
    Sep  5 16:19:53.605: INFO: Pod "pod-subpath-test-projected-nv54": Phase="Running", Reason="", readiness=true. Elapsed: 20.059437872s
    Sep  5 16:19:55.611: INFO: Pod "pod-subpath-test-projected-nv54": Phase="Running", Reason="", readiness=false. Elapsed: 22.065448524s
    Sep  5 16:19:57.615: INFO: Pod "pod-subpath-test-projected-nv54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.069388833s
    STEP: Saw pod success
    Sep  5 16:19:57.615: INFO: Pod "pod-subpath-test-projected-nv54" satisfied condition "Succeeded or Failed"
    Sep  5 16:19:57.618: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-subpath-test-projected-nv54 container test-container-subpath-projected-nv54: <nil>
    STEP: delete the pod
    Sep  5 16:19:57.640: INFO: Waiting for pod pod-subpath-test-projected-nv54 to disappear
    Sep  5 16:19:57.642: INFO: Pod pod-subpath-test-projected-nv54 no longer exists
    STEP: Deleting pod pod-subpath-test-projected-nv54
    Sep  5 16:19:57.643: INFO: Deleting pod "pod-subpath-test-projected-nv54" in namespace "subpath-4721"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:19:57.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-4721" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":78,"skipped":1423,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:19:57.663: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename statefulset
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 37 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:21:07.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-3491" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":79,"skipped":1423,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with downward pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-downwardapi-p646
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  5 16:21:08.070: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-p646" in namespace "subpath-5704" to be "Succeeded or Failed"
    Sep  5 16:21:08.074: INFO: Pod "pod-subpath-test-downwardapi-p646": Phase="Pending", Reason="", readiness=false. Elapsed: 3.395735ms
    Sep  5 16:21:10.079: INFO: Pod "pod-subpath-test-downwardapi-p646": Phase="Running", Reason="", readiness=true. Elapsed: 2.008307293s
    Sep  5 16:21:12.083: INFO: Pod "pod-subpath-test-downwardapi-p646": Phase="Running", Reason="", readiness=true. Elapsed: 4.012471454s
    Sep  5 16:21:14.088: INFO: Pod "pod-subpath-test-downwardapi-p646": Phase="Running", Reason="", readiness=true. Elapsed: 6.017279406s
    Sep  5 16:21:16.092: INFO: Pod "pod-subpath-test-downwardapi-p646": Phase="Running", Reason="", readiness=true. Elapsed: 8.022011855s
    Sep  5 16:21:18.097: INFO: Pod "pod-subpath-test-downwardapi-p646": Phase="Running", Reason="", readiness=true. Elapsed: 10.026529265s
... skipping 2 lines ...
    Sep  5 16:21:24.110: INFO: Pod "pod-subpath-test-downwardapi-p646": Phase="Running", Reason="", readiness=true. Elapsed: 16.039822571s
    Sep  5 16:21:26.115: INFO: Pod "pod-subpath-test-downwardapi-p646": Phase="Running", Reason="", readiness=true. Elapsed: 18.045067986s
    Sep  5 16:21:28.120: INFO: Pod "pod-subpath-test-downwardapi-p646": Phase="Running", Reason="", readiness=true. Elapsed: 20.050071113s
    Sep  5 16:21:30.125: INFO: Pod "pod-subpath-test-downwardapi-p646": Phase="Running", Reason="", readiness=false. Elapsed: 22.054345315s
    Sep  5 16:21:32.130: INFO: Pod "pod-subpath-test-downwardapi-p646": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.059234052s
    STEP: Saw pod success
    Sep  5 16:21:32.130: INFO: Pod "pod-subpath-test-downwardapi-p646" satisfied condition "Succeeded or Failed"
    Sep  5 16:21:32.135: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-subpath-test-downwardapi-p646 container test-container-subpath-downwardapi-p646: <nil>
    STEP: delete the pod
    Sep  5 16:21:32.163: INFO: Waiting for pod pod-subpath-test-downwardapi-p646 to disappear
    Sep  5 16:21:32.167: INFO: Pod pod-subpath-test-downwardapi-p646 no longer exists
    STEP: Deleting pod pod-subpath-test-downwardapi-p646
    Sep  5 16:21:32.167: INFO: Deleting pod "pod-subpath-test-downwardapi-p646" in namespace "subpath-5704"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:21:32.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-5704" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":80,"skipped":1447,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 28 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:21:39.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-5392" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":81,"skipped":1460,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:21:39.387: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-efbcf488-ccaa-4242-8c9e-1a9c33028f3c
    STEP: Creating a pod to test consume configMaps
    Sep  5 16:21:39.435: INFO: Waiting up to 5m0s for pod "pod-configmaps-1e332910-f1e5-4707-aa52-8d10e2bd2d4d" in namespace "configmap-4869" to be "Succeeded or Failed"
    Sep  5 16:21:39.440: INFO: Pod "pod-configmaps-1e332910-f1e5-4707-aa52-8d10e2bd2d4d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.557746ms
    Sep  5 16:21:41.445: INFO: Pod "pod-configmaps-1e332910-f1e5-4707-aa52-8d10e2bd2d4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008871584s
    Sep  5 16:21:43.450: INFO: Pod "pod-configmaps-1e332910-f1e5-4707-aa52-8d10e2bd2d4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014074835s
    STEP: Saw pod success
    Sep  5 16:21:43.450: INFO: Pod "pod-configmaps-1e332910-f1e5-4707-aa52-8d10e2bd2d4d" satisfied condition "Succeeded or Failed"
    Sep  5 16:21:43.454: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-configmaps-1e332910-f1e5-4707-aa52-8d10e2bd2d4d container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 16:21:43.469: INFO: Waiting for pod pod-configmaps-1e332910-f1e5-4707-aa52-8d10e2bd2d4d to disappear
    Sep  5 16:21:43.472: INFO: Pod pod-configmaps-1e332910-f1e5-4707-aa52-8d10e2bd2d4d no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:21:43.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-4869" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":82,"skipped":1475,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:21:43.487: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-map-2faa752c-8f00-4a6f-84d2-25665d5d65b5
    STEP: Creating a pod to test consume secrets
    Sep  5 16:21:43.532: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0bc6090d-108f-4d2b-8e4c-605255287276" in namespace "projected-4610" to be "Succeeded or Failed"
    Sep  5 16:21:43.535: INFO: Pod "pod-projected-secrets-0bc6090d-108f-4d2b-8e4c-605255287276": Phase="Pending", Reason="", readiness=false. Elapsed: 3.502139ms
    Sep  5 16:21:45.541: INFO: Pod "pod-projected-secrets-0bc6090d-108f-4d2b-8e4c-605255287276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009010991s
    Sep  5 16:21:47.545: INFO: Pod "pod-projected-secrets-0bc6090d-108f-4d2b-8e4c-605255287276": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013480203s
    STEP: Saw pod success
    Sep  5 16:21:47.545: INFO: Pod "pod-projected-secrets-0bc6090d-108f-4d2b-8e4c-605255287276" satisfied condition "Succeeded or Failed"
    Sep  5 16:21:47.550: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-projected-secrets-0bc6090d-108f-4d2b-8e4c-605255287276 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  5 16:21:47.569: INFO: Waiting for pod pod-projected-secrets-0bc6090d-108f-4d2b-8e4c-605255287276 to disappear
    Sep  5 16:21:47.572: INFO: Pod pod-projected-secrets-0bc6090d-108f-4d2b-8e4c-605255287276 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:21:47.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4610" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":83,"skipped":1478,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 16:21:47.684: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c07c61f-c890-4389-8eec-1077a0ec4ade" in namespace "downward-api-4771" to be "Succeeded or Failed"
    Sep  5 16:21:47.689: INFO: Pod "downwardapi-volume-4c07c61f-c890-4389-8eec-1077a0ec4ade": Phase="Pending", Reason="", readiness=false. Elapsed: 4.758697ms
    Sep  5 16:21:49.694: INFO: Pod "downwardapi-volume-4c07c61f-c890-4389-8eec-1077a0ec4ade": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00926507s
    Sep  5 16:21:51.698: INFO: Pod "downwardapi-volume-4c07c61f-c890-4389-8eec-1077a0ec4ade": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01372375s
    STEP: Saw pod success
    Sep  5 16:21:51.698: INFO: Pod "downwardapi-volume-4c07c61f-c890-4389-8eec-1077a0ec4ade" satisfied condition "Succeeded or Failed"
    Sep  5 16:21:51.702: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod downwardapi-volume-4c07c61f-c890-4389-8eec-1077a0ec4ade container client-container: <nil>
    STEP: delete the pod
    Sep  5 16:21:51.720: INFO: Waiting for pod downwardapi-volume-4c07c61f-c890-4389-8eec-1077a0ec4ade to disappear
    Sep  5 16:21:51.724: INFO: Pod downwardapi-volume-4c07c61f-c890-4389-8eec-1077a0ec4ade no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:21:51.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-4771" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":84,"skipped":1513,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:21:58.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-6552" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":85,"skipped":1516,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:22:00.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-4588" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":86,"skipped":1522,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:22:00.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-98" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":87,"skipped":1533,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Lease
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:22:01.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "lease-test-2381" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":88,"skipped":1568,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] PodTemplates
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:22:01.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "podtemplate-1934" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":89,"skipped":1635,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 61 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:22:07.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-6862" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":90,"skipped":1649,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:22:14.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9944" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":91,"skipped":1673,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:24:00.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-2489" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":92,"skipped":1703,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 43 lines ...
    STEP: Destroying namespace "services-2971" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":93,"skipped":1737,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":68,"skipped":1095,"failed":3,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:18:58.996: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
    
    STEP: creating a pod to probe DNS
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  5 16:22:35.186: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-1143/dns-test-10ed7b94-3deb-4168-b1e8-34954d306122: the server is currently unable to handle the request (get pods dns-test-10ed7b94-3deb-4168-b1e8-34954d306122)
    Sep  5 16:24:01.070: FAIL: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1143/dns-test-10ed7b94-3deb-4168-b1e8-34954d306122: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-1143/pods/dns-test-10ed7b94-3deb-4168-b1e8-34954d306122/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local": context deadline exceeded
    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc000124010, 0x7f4f92dcaf18, 0x18, 0xc00417c108)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc000124010, 0xc003cda0b0, 0x2a14500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f
... skipping 17 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b
    testing.tRunner(0xc000cc8000, 0x729a2d8)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0905 16:24:01.070844      18 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep  5 16:24:01.070: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1143/dns-test-10ed7b94-3deb-4168-b1e8-34954d306122: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-1143/pods/dns-test-10ed7b94-3deb-4168-b1e8-34954d306122/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:217, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc000124010, 0x7f4f92dcaf18, 0x18, 0xc00417c108)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x78de4a8, 0xc000124010, 0xc003cda0b0, 0x2a14500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x78de4a8, 0xc000124010, 0xc00417c101, 0xc00417c108, 0xc003cda0b0, 0x6826620, 0xc003cda0b0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:577 +0xe5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x78de4a8, 0xc000124010, 0x12a05f200, 0x8bb2c97000, 0xc003cda0b0, 0x6d6e4e0, 0x2521201)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc003e801c0, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc001abf280, 0x8, 0x8, 0x702fe9b, 0x7, 0xc000079400, 0x7971668, 0xc003e3eb00, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x13c\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000f706e0, 0xc000079400, 0xc001abf280, 0x8, 0x8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.1()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:64 +0x58a\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc000cc8000)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc000cc8000)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b\ntesting.tRunner(0xc000cc8000, 0x729a2d8)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.

    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6bbe4c0, 0xc004206200)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6bbe4c0, 0xc004206200)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc000568420, 0x159, 0x88abe86, 0x7d, 0xd9, 0xc00020f400, 0xa87)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x62ef260, 0x77956f0)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc000568420, 0x159, 0xc0017df6d8, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc000568420, 0x159, 0xc0017df7c0, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x70d3e4f, 0x24, 0xc0017dfa20, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x78de4a8, 0xc000124010, 0x7f4f92dcaf18, 0x18, 0xc00417c108)
... skipping 58 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 16:24:01.070: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1143/dns-test-10ed7b94-3deb-4168-b1e8-34954d306122: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-1143/pods/dns-test-10ed7b94-3deb-4168-b1e8-34954d306122/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local": context deadline exceeded
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":68,"skipped":1095,"failed":4,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:24:03.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-6944" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":69,"skipped":1112,"failed":4,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:24:04.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "certificates-7240" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":70,"skipped":1116,"failed":4,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:24:00.736: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on node default medium
    Sep  5 16:24:00.786: INFO: Waiting up to 5m0s for pod "pod-994bc1d9-f83c-4841-a6cd-2741303d634d" in namespace "emptydir-5221" to be "Succeeded or Failed"
    Sep  5 16:24:00.790: INFO: Pod "pod-994bc1d9-f83c-4841-a6cd-2741303d634d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099664ms
    Sep  5 16:24:02.795: INFO: Pod "pod-994bc1d9-f83c-4841-a6cd-2741303d634d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009376329s
    Sep  5 16:24:04.801: INFO: Pod "pod-994bc1d9-f83c-4841-a6cd-2741303d634d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015159156s
    STEP: Saw pod success
    Sep  5 16:24:04.801: INFO: Pod "pod-994bc1d9-f83c-4841-a6cd-2741303d634d" satisfied condition "Succeeded or Failed"
    Sep  5 16:24:04.805: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-grv26 pod pod-994bc1d9-f83c-4841-a6cd-2741303d634d container test-container: <nil>
    STEP: delete the pod
    Sep  5 16:24:04.834: INFO: Waiting for pod pod-994bc1d9-f83c-4841-a6cd-2741303d634d to disappear
    Sep  5 16:24:04.838: INFO: Pod pod-994bc1d9-f83c-4841-a6cd-2741303d634d no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:24:04.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-5221" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":94,"skipped":1798,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:24:06.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-6731" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":71,"skipped":1167,"failed":4,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:24:06.314: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  5 16:24:06.360: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-e7f0d816-1c0f-4834-8de7-b7926cf42e53" in namespace "security-context-test-7626" to be "Succeeded or Failed"
    Sep  5 16:24:06.363: INFO: Pod "busybox-readonly-false-e7f0d816-1c0f-4834-8de7-b7926cf42e53": Phase="Pending", Reason="", readiness=false. Elapsed: 3.063777ms
    Sep  5 16:24:08.368: INFO: Pod "busybox-readonly-false-e7f0d816-1c0f-4834-8de7-b7926cf42e53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007649818s
    Sep  5 16:24:10.373: INFO: Pod "busybox-readonly-false-e7f0d816-1c0f-4834-8de7-b7926cf42e53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012678507s
    Sep  5 16:24:10.373: INFO: Pod "busybox-readonly-false-e7f0d816-1c0f-4834-8de7-b7926cf42e53" satisfied condition "Succeeded or Failed"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:24:10.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-7626" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":72,"skipped":1176,"failed":4,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:24:10.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-1070" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":73,"skipped":1178,"failed":4,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:24:26.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-9962" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":95,"skipped":1810,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
    Sep  5 16:24:12.656: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:12.661: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:12.667: INFO: Unable to read jessie_udp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:12.671: INFO: Unable to read jessie_tcp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:12.674: INFO: Unable to read jessie_udp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:12.679: INFO: Unable to read jessie_tcp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:12.717: INFO: Lookups using dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2817 wheezy_tcp@dns-test-service.dns-2817 wheezy_udp@dns-test-service.dns-2817.svc wheezy_tcp@dns-test-service.dns-2817.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2817 jessie_tcp@dns-test-service.dns-2817 jessie_udp@dns-test-service.dns-2817.svc jessie_tcp@dns-test-service.dns-2817.svc]
    
    Sep  5 16:24:17.723: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:17.727: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:17.732: INFO: Unable to read wheezy_udp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:17.736: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:17.739: INFO: Unable to read wheezy_udp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:17.744: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:17.779: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:17.783: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:17.788: INFO: Unable to read jessie_udp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:17.792: INFO: Unable to read jessie_tcp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:17.796: INFO: Unable to read jessie_udp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:17.801: INFO: Unable to read jessie_tcp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:17.835: INFO: Lookups using dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2817 wheezy_tcp@dns-test-service.dns-2817 wheezy_udp@dns-test-service.dns-2817.svc wheezy_tcp@dns-test-service.dns-2817.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2817 jessie_tcp@dns-test-service.dns-2817 jessie_udp@dns-test-service.dns-2817.svc jessie_tcp@dns-test-service.dns-2817.svc]
    
    Sep  5 16:24:22.723: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:22.727: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:22.731: INFO: Unable to read wheezy_udp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:22.735: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:22.741: INFO: Unable to read wheezy_udp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:22.747: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:22.788: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:22.792: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:22.795: INFO: Unable to read jessie_udp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:22.798: INFO: Unable to read jessie_tcp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:22.802: INFO: Unable to read jessie_udp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:22.809: INFO: Unable to read jessie_tcp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:22.837: INFO: Lookups using dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2817 wheezy_tcp@dns-test-service.dns-2817 wheezy_udp@dns-test-service.dns-2817.svc wheezy_tcp@dns-test-service.dns-2817.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2817 jessie_tcp@dns-test-service.dns-2817 jessie_udp@dns-test-service.dns-2817.svc jessie_tcp@dns-test-service.dns-2817.svc]
    
    Sep  5 16:24:27.723: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:27.727: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:27.732: INFO: Unable to read wheezy_udp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:27.737: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:27.745: INFO: Unable to read wheezy_udp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:27.749: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:27.786: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:27.790: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:27.794: INFO: Unable to read jessie_udp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:27.797: INFO: Unable to read jessie_tcp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:27.802: INFO: Unable to read jessie_udp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:27.805: INFO: Unable to read jessie_tcp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:27.837: INFO: Lookups using dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2817 wheezy_tcp@dns-test-service.dns-2817 wheezy_udp@dns-test-service.dns-2817.svc wheezy_tcp@dns-test-service.dns-2817.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2817 jessie_tcp@dns-test-service.dns-2817 jessie_udp@dns-test-service.dns-2817.svc jessie_tcp@dns-test-service.dns-2817.svc]
    
    Sep  5 16:24:32.721: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:32.724: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:32.728: INFO: Unable to read wheezy_udp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:32.732: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:32.736: INFO: Unable to read wheezy_udp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:32.739: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:32.777: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:32.783: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:32.787: INFO: Unable to read jessie_udp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:32.790: INFO: Unable to read jessie_tcp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:32.795: INFO: Unable to read jessie_udp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:32.799: INFO: Unable to read jessie_tcp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:32.832: INFO: Lookups using dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2817 wheezy_tcp@dns-test-service.dns-2817 wheezy_udp@dns-test-service.dns-2817.svc wheezy_tcp@dns-test-service.dns-2817.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2817 jessie_tcp@dns-test-service.dns-2817 jessie_udp@dns-test-service.dns-2817.svc jessie_tcp@dns-test-service.dns-2817.svc]
    
    Sep  5 16:24:37.723: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:37.727: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:37.731: INFO: Unable to read wheezy_udp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:37.734: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:37.738: INFO: Unable to read wheezy_udp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:37.741: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:37.775: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:37.779: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:37.784: INFO: Unable to read jessie_udp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:37.789: INFO: Unable to read jessie_tcp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:37.793: INFO: Unable to read jessie_udp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:37.798: INFO: Unable to read jessie_tcp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:37.831: INFO: Lookups using dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2817 wheezy_tcp@dns-test-service.dns-2817 wheezy_udp@dns-test-service.dns-2817.svc wheezy_tcp@dns-test-service.dns-2817.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2817 jessie_tcp@dns-test-service.dns-2817 jessie_udp@dns-test-service.dns-2817.svc jessie_tcp@dns-test-service.dns-2817.svc]
    
    Sep  5 16:24:42.726: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:42.731: INFO: Unable to read wheezy_udp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:42.736: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2817 from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:42.741: INFO: Unable to read wheezy_udp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:42.745: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2817.svc from pod dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31: the server could not find the requested resource (get pods dns-test-954faed2-258d-43f9-8a30-16d7766a9f31)
    Sep  5 16:24:42.853: INFO: Lookups using dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31 failed for: [wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2817 wheezy_tcp@dns-test-service.dns-2817 wheezy_udp@dns-test-service.dns-2817.svc wheezy_tcp@dns-test-service.dns-2817.svc]
    
    Sep  5 16:24:47.832: INFO: DNS probes using dns-2817/dns-test-954faed2-258d-43f9-8a30-16d7766a9f31 succeeded
    
    STEP: deleting the pod
    STEP: deleting the test service
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:24:47.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-2817" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":74,"skipped":1205,"failed":4,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}
    
    SSSSSSSSSSSS
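
    The DNS rounds above repeat the same set of lookups roughly every five seconds until none fail: UDP and TCP queries for the test service at three scopes (dns-test-service, dns-test-service.dns-2817, dns-test-service.dns-2817.svc) issued from both the "wheezy" and "jessie" probe images. Below is a minimal Go sketch of one such round; it is illustrative only (not the e2e framework's code) and assumes it runs inside a cluster pod whose /etc/resolv.conf supplies the cluster DNS server and search domains. The names are copied from this run and the dns-2817 namespace suffix is generated per run.

        // dnslookup.go - illustrative sketch, not part of the test suite.
        package main

        import (
            "context"
            "fmt"
            "net"
            "time"
        )

        func main() {
            names := []string{
                "dns-test-service",
                "dns-test-service.dns-2817",
                "dns-test-service.dns-2817.svc",
            }
            resolver := &net.Resolver{PreferGo: true} // uses the pod's resolv.conf, including search domains
            for _, name := range names {
                ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
                addrs, err := resolver.LookupHost(ctx, name)
                cancel()
                if err != nil {
                    fmt.Printf("Unable to resolve %s: %v\n", name, err)
                    continue
                }
                fmt.Printf("%s -> %v\n", name, addrs)
            }
        }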
    ------------------------------
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
... skipping 4 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
    [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
    STEP: Watching for error events or started pod
    STEP: Waiting for pod completion
    STEP: Checking that the pod succeeded
    STEP: Getting logs from the pod
    STEP: Checking that the sysctl is actually updated
    [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:24:52.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-2442" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":75,"skipped":1217,"failed":4,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
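
    The sysctl test above creates a pod that requests kernel.shm_rmid_forced (pod sysctls are set through the pod securityContext), then reads the pod's logs to confirm the value was actually applied inside the container. A minimal sketch of that in-container check follows; it is illustrative only and simply reads the corresponding procfs entry.

        // sysctlcheck.go - illustrative sketch of the in-container verification.
        package main

        import (
            "fmt"
            "os"
            "strings"
        )

        func main() {
            // kernel.shm_rmid_forced maps to this procfs path inside the container.
            data, err := os.ReadFile("/proc/sys/kernel/shm_rmid_forced")
            if err != nil {
                fmt.Fprintln(os.Stderr, "reading sysctl:", err)
                os.Exit(1)
            }
            fmt.Println("kernel.shm_rmid_forced =", strings.TrimSpace(string(data)))
        }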
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:24:53.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-5608" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":96,"skipped":1817,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] server version
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:24:53.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "server-version-9383" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":97,"skipped":1833,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:24:52.269: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test env composition
    Sep  5 16:24:52.323: INFO: Waiting up to 5m0s for pod "var-expansion-206382d8-c568-4df7-96b0-33e84532ee6f" in namespace "var-expansion-5047" to be "Succeeded or Failed"
    Sep  5 16:24:52.331: INFO: Pod "var-expansion-206382d8-c568-4df7-96b0-33e84532ee6f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.559212ms
    Sep  5 16:24:54.338: INFO: Pod "var-expansion-206382d8-c568-4df7-96b0-33e84532ee6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01492584s
    Sep  5 16:24:56.343: INFO: Pod "var-expansion-206382d8-c568-4df7-96b0-33e84532ee6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019845692s
    STEP: Saw pod success
    Sep  5 16:24:56.343: INFO: Pod "var-expansion-206382d8-c568-4df7-96b0-33e84532ee6f" satisfied condition "Succeeded or Failed"
    Sep  5 16:24:56.346: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod var-expansion-206382d8-c568-4df7-96b0-33e84532ee6f container dapi-container: <nil>
    STEP: delete the pod
    Sep  5 16:24:56.362: INFO: Waiting for pod var-expansion-206382d8-c568-4df7-96b0-33e84532ee6f to disappear
    Sep  5 16:24:56.366: INFO: Pod var-expansion-206382d8-c568-4df7-96b0-33e84532ee6f no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:24:56.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-5047" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":76,"skipped":1244,"failed":4,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:24:56.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7520" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":77,"skipped":1248,"failed":4,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}
    
    SS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 47 lines ...
    STEP: Destroying namespace "webhook-6448-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":78,"skipped":1250,"failed":4,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}
    
    SSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":98,"skipped":1845,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:24:57.413: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename disruption
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:25:01.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-5858" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":99,"skipped":1845,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:25:01.584: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override all
    Sep  5 16:25:01.715: INFO: Waiting up to 5m0s for pod "client-containers-ab76c97f-f31f-48bc-b91a-d343123120f0" in namespace "containers-657" to be "Succeeded or Failed"
    Sep  5 16:25:01.725: INFO: Pod "client-containers-ab76c97f-f31f-48bc-b91a-d343123120f0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.167388ms
    Sep  5 16:25:03.732: INFO: Pod "client-containers-ab76c97f-f31f-48bc-b91a-d343123120f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016769973s
    Sep  5 16:25:05.747: INFO: Pod "client-containers-ab76c97f-f31f-48bc-b91a-d343123120f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032087122s
    STEP: Saw pod success
    Sep  5 16:25:05.747: INFO: Pod "client-containers-ab76c97f-f31f-48bc-b91a-d343123120f0" satisfied condition "Succeeded or Failed"
    Sep  5 16:25:05.766: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod client-containers-ab76c97f-f31f-48bc-b91a-d343123120f0 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  5 16:25:05.796: INFO: Waiting for pod client-containers-ab76c97f-f31f-48bc-b91a-d343123120f0 to disappear
    Sep  5 16:25:05.801: INFO: Pod client-containers-ab76c97f-f31f-48bc-b91a-d343123120f0 no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:25:05.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-657" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":100,"skipped":1847,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 34 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:25:06.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-6136" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":79,"skipped":1265,"failed":4,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "services-6889" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":80,"skipped":1301,"failed":4,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 29 lines ...
    Sep  5 16:25:24.049: INFO: Pod pod-with-prestop-exec-hook still exists
    Sep  5 16:25:26.044: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
    Sep  5 16:25:26.048: INFO: Pod pod-with-prestop-exec-hook still exists
    Sep  5 16:25:28.045: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
    Sep  5 16:25:28.049: INFO: Pod pod-with-prestop-exec-hook no longer exists
    STEP: check prestop hook
    Sep  5 16:25:58.050: FAIL: Timed out after 30.001s.
    Expected
        <*errors.errorString | 0xc004c65d20>: {
            s: "failed to match regexp \"GET /echo\\\\?msg=prestop\" in output \"I0905 16:25:06.642746       1 log.go:195] Started HTTP server on port 8080\\nI0905 16:25:06.644667       1 log.go:195] Started UDP server on port  8081\\n\"",
        }
    to be nil
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common/node.glob..func11.1.2(0xc003294400)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:79 +0x342
... skipping 21 lines ...
        should execute prestop exec hook properly [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep  5 16:25:58.050: Timed out after 30.001s.
        Expected
            <*errors.errorString | 0xc004c65d20>: {
                s: "failed to match regexp \"GET /echo\\\\?msg=prestop\" in output \"I0905 16:25:06.642746       1 log.go:195] Started HTTP server on port 8080\\nI0905 16:25:06.644667       1 log.go:195] Started UDP server on port  8081\\n\"",
            }
        to be nil
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:79
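
    The timeout above is a log-matching assertion: the prestop hook is expected to call GET /echo?msg=prestop on the handler pod, and the test scans the handler's output for that request. Here the captured output only shows the HTTP and UDP servers starting, so the pattern never appears within the 30s window. A standalone sketch of the same check is below; it is illustrative only, with the output string copied from the failure message above.

        // prestopcheck.go - illustrative sketch mirroring the assertion that timed out above.
        package main

        import (
            "fmt"
            "regexp"
        )

        func main() {
            // Handler pod output as reported in the failure; the prestop request is absent.
            output := "I0905 16:25:06.642746       1 log.go:195] Started HTTP server on port 8080\n" +
                "I0905 16:25:06.644667       1 log.go:195] Started UDP server on port  8081\n"

            re := regexp.MustCompile(`GET /echo\?msg=prestop`)
            if re.MatchString(output) {
                fmt.Println("prestop hook reached the handler pod")
            } else {
                fmt.Println("no prestop request seen in handler output")
            }
        }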
    ------------------------------
    {"msg":"FAILED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":100,"skipped":1877,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:25:58.064: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-lifecycle-hook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:26:04.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-9258" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":101,"skipped":1877,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:26:08.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-8648" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":102,"skipped":1881,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}
    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's memory request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  5 16:26:08.409: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d402fa99-48d3-448c-96ad-9083c066db9f" in namespace "downward-api-9716" to be "Succeeded or Failed"
    Sep  5 16:26:08.412: INFO: Pod "downwardapi-volume-d402fa99-48d3-448c-96ad-9083c066db9f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.016351ms
    Sep  5 16:26:10.417: INFO: Pod "downwardapi-volume-d402fa99-48d3-448c-96ad-9083c066db9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007619262s
    Sep  5 16:26:12.421: INFO: Pod "downwardapi-volume-d402fa99-48d3-448c-96ad-9083c066db9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012336699s
    STEP: Saw pod success
    Sep  5 16:26:12.422: INFO: Pod "downwardapi-volume-d402fa99-48d3-448c-96ad-9083c066db9f" satisfied condition "Succeeded or Failed"
    Sep  5 16:26:12.426: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-md-0-85nn6-54859d8bd4-grv26 pod downwardapi-volume-d402fa99-48d3-448c-96ad-9083c066db9f container client-container: <nil>
    STEP: delete the pod
    Sep  5 16:26:12.445: INFO: Waiting for pod downwardapi-volume-d402fa99-48d3-448c-96ad-9083c066db9f to disappear
    Sep  5 16:26:12.449: INFO: Pod downwardapi-volume-d402fa99-48d3-448c-96ad-9083c066db9f no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:26:12.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-9716" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":103,"skipped":1891,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}
    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
    STEP: Listing all of the created validation webhooks
    Sep  5 16:26:26.001: INFO: Waiting for webhook configuration to be ready...
    Sep  5 16:26:36.125: INFO: Waiting for webhook configuration to be ready...
    Sep  5 16:26:46.233: INFO: Waiting for webhook configuration to be ready...
    Sep  5 16:26:56.325: INFO: Waiting for webhook configuration to be ready...
    Sep  5 16:27:06.348: INFO: Waiting for webhook configuration to be ready...
    Sep  5 16:27:06.349: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc0002bc280>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      listing validating webhooks should work [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 16:27:06.349: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc0002bc280>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:606
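
    The repeated "Waiting for webhook configuration to be ready..." lines and the final "timed out waiting for the condition" are the usual poll-until-timeout shape: retry a readiness check on an interval and give up at a fixed deadline. The generic Go sketch below shows that shape only; it is illustrative, uses stand-in interval and timeout values, and its condition is a placeholder where the real test checks that the webhook configurations it registered are present and being served.

        // pollsketch.go - illustrative sketch of a poll-until-timeout loop.
        package main

        import (
            "errors"
            "fmt"
            "time"
        )

        // pollUntil retries check every interval until it returns true, errors, or timeout elapses.
        func pollUntil(interval, timeout time.Duration, check func() (bool, error)) error {
            deadline := time.Now().Add(timeout)
            for {
                ok, err := check()
                if err != nil {
                    return err
                }
                if ok {
                    return nil
                }
                if time.Now().After(deadline) {
                    return errors.New("timed out waiting for the condition")
                }
                time.Sleep(interval)
            }
        }

        func main() {
            // Stand-in condition: a real caller would query the API server here.
            err := pollUntil(10*time.Second, time.Minute, func() (bool, error) {
                fmt.Println("Waiting for webhook configuration to be ready...")
                return false, nil
            })
            fmt.Println(err)
        }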
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":103,"skipped":1897,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:27:06.424: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
    STEP: Destroying namespace "webhook-2922-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":104,"skipped":1897,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:27:23.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-1014" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":105,"skipped":1914,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 133 lines ...
    Sep  5 16:27:20.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-973 exec execpod-affinity6n9xt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
    Sep  5 16:27:22.699: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n"
    Sep  5 16:27:22.699: INFO: stdout: ""
    Sep  5 16:27:22.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-973 exec execpod-affinity6n9xt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
    Sep  5 16:27:24.880: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n"
    Sep  5 16:27:24.880: INFO: stdout: ""
    Sep  5 16:27:24.881: FAIL: Unexpected error:
        <*errors.errorString | 0xc005f58330>: {
            s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol",
        }
        service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol
    occurred
    
... skipping 27 lines ...
    • Failure [133.291 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
      should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 16:27:24.881: Unexpected error:
          <*errors.errorString | 0xc005f58330>: {
              s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol",
          }
          service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3278
    ------------------------------
    {"msg":"FAILED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":80,"skipped":1326,"failed":5,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
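
The failure above comes from the session-affinity probe, not from the TCP connect itself: the log shows nc reaching affinity-clusterip-transition:80 but stdout staying empty, so the test never collects the backend hostnames it needs and gives up after 2m0s. The immediate retry below passes, which points at a transient condition while the cluster was being upgraded. A rough way to re-run the same probe by hand, using the run-specific names printed above, for as long as the services-973 namespace still exists:

kubectl --kubeconfig=/tmp/kubeconfig -n services-973 exec execpod-affinity6n9xt -- \
  /bin/sh -c 'echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
# If stdout stays empty even though the connect succeeds, check whether the
# service had ready endpoints at that moment:
kubectl --kubeconfig=/tmp/kubeconfig -n services-973 get endpoints affinity-clusterip-transition -o wide
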
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:27:27.516: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 62 lines ...
    STEP: Destroying namespace "services-630" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":81,"skipped":1326,"failed":5,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:27:58.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-4016" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":106,"skipped":1918,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
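
The job spec above only exercises create/delete of a Job through the API; roughly, it checks that deleting the Job also removes the pods it owns. A minimal way to see the same behaviour by hand (names and image are illustrative, not the test's own):

kubectl create job delete-demo --image=busybox:1.36 -- sleep 300
kubectl delete job delete-demo
kubectl get pods -l job-name=delete-demo   # should come back empty once cascading deletion finishes
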
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188
    [It] should contain environment variables for services [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  5 16:27:58.241: INFO: The status of Pod server-envvars-c480f3c4-0b2f-44ca-954a-53079e11adbc is Pending, waiting for it to be Running (with Ready = true)
    Sep  5 16:28:00.247: INFO: The status of Pod server-envvars-c480f3c4-0b2f-44ca-954a-53079e11adbc is Running (Ready = true)
    Sep  5 16:28:00.288: INFO: Waiting up to 5m0s for pod "client-envvars-fc519c9c-b618-4225-8733-55e0b1c85c5a" in namespace "pods-6575" to be "Succeeded or Failed"
    Sep  5 16:28:00.294: INFO: Pod "client-envvars-fc519c9c-b618-4225-8733-55e0b1c85c5a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.970661ms
    Sep  5 16:28:02.299: INFO: Pod "client-envvars-fc519c9c-b618-4225-8733-55e0b1c85c5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011090968s
    Sep  5 16:28:04.306: INFO: Pod "client-envvars-fc519c9c-b618-4225-8733-55e0b1c85c5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017365168s
    STEP: Saw pod success
    Sep  5 16:28:04.306: INFO: Pod "client-envvars-fc519c9c-b618-4225-8733-55e0b1c85c5a" satisfied condition "Succeeded or Failed"
    Sep  5 16:28:04.310: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-0xh5an pod client-envvars-fc519c9c-b618-4225-8733-55e0b1c85c5a container env3cont: <nil>
    STEP: delete the pod
    Sep  5 16:28:04.348: INFO: Waiting for pod client-envvars-fc519c9c-b618-4225-8733-55e0b1c85c5a to disappear
    Sep  5 16:28:04.353: INFO: Pod client-envvars-fc519c9c-b618-4225-8733-55e0b1c85c5a no longer exists
    [AfterEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:28:04.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-6575" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":107,"skipped":1947,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
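
The pods spec above asserts that the kubelet injects *_SERVICE_HOST / *_SERVICE_PORT variables for services that already existed when the client pod started. A minimal sketch of eyeballing the same variables by hand (pod name and image are illustrative):

kubectl run envvar-check --image=busybox:1.36 --restart=Never -- sleep 3600
kubectl wait pod/envvar-check --for=condition=Ready --timeout=60s
kubectl exec envvar-check -- env | grep -E '_SERVICE_(HOST|PORT)'
kubectl delete pod envvar-check
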
    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:28:04.443: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on node default medium
    Sep  5 16:28:04.515: INFO: Waiting up to 5m0s for pod "pod-9c362b8a-cf05-4591-9db3-cd299395c123" in namespace "emptydir-1672" to be "Succeeded or Failed"
    Sep  5 16:28:04.523: INFO: Pod "pod-9c362b8a-cf05-4591-9db3-cd299395c123": Phase="Pending", Reason="", readiness=false. Elapsed: 7.661588ms
    Sep  5 16:28:06.528: INFO: Pod "pod-9c362b8a-cf05-4591-9db3-cd299395c123": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013151798s
    Sep  5 16:28:08.535: INFO: Pod "pod-9c362b8a-cf05-4591-9db3-cd299395c123": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019631705s
    STEP: Saw pod success
    Sep  5 16:28:08.535: INFO: Pod "pod-9c362b8a-cf05-4591-9db3-cd299395c123" satisfied condition "Succeeded or Failed"
    Sep  5 16:28:08.539: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-9c362b8a-cf05-4591-9db3-cd299395c123 container test-container: <nil>
    STEP: delete the pod
    Sep  5 16:28:08.576: INFO: Waiting for pod pod-9c362b8a-cf05-4591-9db3-cd299395c123 to disappear
    Sep  5 16:28:08.581: INFO: Pod pod-9c362b8a-cf05-4591-9db3-cd299395c123 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:28:08.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-1672" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":108,"skipped":1971,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
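
Both emptydir specs shown here follow the same pattern: start a pod that mounts an emptyDir on the node's default medium, have the test container create a file with the requested mode, and assert on what the container reports. A stripped-down pod with the same volume shape (names, image and command are illustrative, not the test's):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.36
    command: ["sh", "-c", "echo hi > /cache/f && ls -ln /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}
EOF
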
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:28:09.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-1770" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":109,"skipped":2001,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
    
    S
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:28:11.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-8992" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":110,"skipped":2002,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:28:11.681: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide host IP as an env var [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep  5 16:28:11.741: INFO: Waiting up to 5m0s for pod "downward-api-22104aef-22ca-4e03-a376-5dad8d02ef20" in namespace "downward-api-7974" to be "Succeeded or Failed"
    Sep  5 16:28:11.746: INFO: Pod "downward-api-22104aef-22ca-4e03-a376-5dad8d02ef20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.581411ms
    Sep  5 16:28:13.751: INFO: Pod "downward-api-22104aef-22ca-4e03-a376-5dad8d02ef20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009397814s
    Sep  5 16:28:15.756: INFO: Pod "downward-api-22104aef-22ca-4e03-a376-5dad8d02ef20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014387887s
    STEP: Saw pod success
    Sep  5 16:28:15.756: INFO: Pod "downward-api-22104aef-22ca-4e03-a376-5dad8d02ef20" satisfied condition "Succeeded or Failed"
    Sep  5 16:28:15.759: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod downward-api-22104aef-22ca-4e03-a376-5dad8d02ef20 container dapi-container: <nil>
    STEP: delete the pod
    Sep  5 16:28:15.778: INFO: Waiting for pod downward-api-22104aef-22ca-4e03-a376-5dad8d02ef20 to disappear
    Sep  5 16:28:15.781: INFO: Pod downward-api-22104aef-22ca-4e03-a376-5dad8d02ef20 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:28:15.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-7974" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":111,"skipped":2025,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
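
The downward-API spec above injects the node's IP into the container environment through a fieldRef on status.hostIP. A minimal sketch of the same wiring (pod, container and variable names are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-hostip-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.36
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF
kubectl logs downward-hostip-demo   # once the pod has run, this prints the node's IP
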
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] KubeletManagedEtcHosts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 47 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:28:20.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "e2e-kubelet-etc-hosts-2285" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":112,"skipped":2066,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
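
The /etc/hosts spec above distinguishes containers that get the kubelet-managed hosts file from ones that mount their own. The managed copy is easy to recognise by hand, because the kubelet writes a marker header into it (pod name below is a placeholder):

kubectl exec <pod-name> -- head -2 /etc/hosts   # a kubelet-managed file starts with a "# Kubernetes-managed hosts file" header
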
    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:28:20.951: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-map-e9ff1838-ae4a-40f3-86a1-b2c7b132be55
    STEP: Creating a pod to test consume secrets
    Sep  5 16:28:21.010: INFO: Waiting up to 5m0s for pod "pod-secrets-b353c302-7753-482d-8ebc-0bcb7740cfda" in namespace "secrets-3961" to be "Succeeded or Failed"
    Sep  5 16:28:21.015: INFO: Pod "pod-secrets-b353c302-7753-482d-8ebc-0bcb7740cfda": Phase="Pending", Reason="", readiness=false. Elapsed: 4.797017ms
    Sep  5 16:28:23.020: INFO: Pod "pod-secrets-b353c302-7753-482d-8ebc-0bcb7740cfda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010081943s
    Sep  5 16:28:25.027: INFO: Pod "pod-secrets-b353c302-7753-482d-8ebc-0bcb7740cfda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016735878s
    STEP: Saw pod success
    Sep  5 16:28:25.027: INFO: Pod "pod-secrets-b353c302-7753-482d-8ebc-0bcb7740cfda" satisfied condition "Succeeded or Failed"
    Sep  5 16:28:25.031: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-secrets-b353c302-7753-482d-8ebc-0bcb7740cfda container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  5 16:28:25.050: INFO: Waiting for pod pod-secrets-b353c302-7753-482d-8ebc-0bcb7740cfda to disappear
    Sep  5 16:28:25.053: INFO: Pod pod-secrets-b353c302-7753-482d-8ebc-0bcb7740cfda no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:28:25.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-3961" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":113,"skipped":2069,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
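
The secret-volume spec above mounts a secret through an explicit items mapping, renaming the key on disk and setting a per-file mode. A minimal sketch of that shape (secret name, key and contents are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: mode-demo-secret
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.36
    command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: mode-demo-secret
      items:
      - key: data-1
        path: new-path-data-1
        mode: 0400
EOF
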
    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:28:25.068: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name projected-secret-test-e3984003-5abc-43e7-b594-1e197b7ddada
    STEP: Creating a pod to test consume secrets
    Sep  5 16:28:25.117: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-918c6113-5ea3-4346-b677-1065282a82fc" in namespace "projected-7185" to be "Succeeded or Failed"
    Sep  5 16:28:25.126: INFO: Pod "pod-projected-secrets-918c6113-5ea3-4346-b677-1065282a82fc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.373315ms
    Sep  5 16:28:27.131: INFO: Pod "pod-projected-secrets-918c6113-5ea3-4346-b677-1065282a82fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013428359s
    Sep  5 16:28:29.137: INFO: Pod "pod-projected-secrets-918c6113-5ea3-4346-b677-1065282a82fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018709609s
    STEP: Saw pod success
    Sep  5 16:28:29.137: INFO: Pod "pod-projected-secrets-918c6113-5ea3-4346-b677-1065282a82fc" satisfied condition "Succeeded or Failed"
    Sep  5 16:28:29.139: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-c48v4q pod pod-projected-secrets-918c6113-5ea3-4346-b677-1065282a82fc container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  5 16:28:29.155: INFO: Waiting for pod pod-projected-secrets-918c6113-5ea3-4346-b677-1065282a82fc to disappear
    Sep  5 16:28:29.158: INFO: Pod pod-projected-secrets-918c6113-5ea3-4346-b677-1065282a82fc no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:28:29.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7185" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":114,"skipped":2072,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:28:29.185: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-35f66ddb-1431-46b5-a50c-3885b1caea50
    STEP: Creating a pod to test consume secrets
    Sep  5 16:28:29.239: INFO: Waiting up to 5m0s for pod "pod-secrets-07941dc5-d7a4-4b1f-962f-d46906d3e5be" in namespace "secrets-6233" to be "Succeeded or Failed"
    Sep  5 16:28:29.243: INFO: Pod "pod-secrets-07941dc5-d7a4-4b1f-962f-d46906d3e5be": Phase="Pending", Reason="", readiness=false. Elapsed: 3.246665ms
    Sep  5 16:28:31.248: INFO: Pod "pod-secrets-07941dc5-d7a4-4b1f-962f-d46906d3e5be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008617123s
    Sep  5 16:28:33.254: INFO: Pod "pod-secrets-07941dc5-d7a4-4b1f-962f-d46906d3e5be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015096463s
    STEP: Saw pod success
    Sep  5 16:28:33.255: INFO: Pod "pod-secrets-07941dc5-d7a4-4b1f-962f-d46906d3e5be" satisfied condition "Succeeded or Failed"
    Sep  5 16:28:33.259: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-0xh5an pod pod-secrets-07941dc5-d7a4-4b1f-962f-d46906d3e5be container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  5 16:28:33.277: INFO: Waiting for pod pod-secrets-07941dc5-d7a4-4b1f-962f-d46906d3e5be to disappear
    Sep  5 16:28:33.286: INFO: Pod pod-secrets-07941dc5-d7a4-4b1f-962f-d46906d3e5be no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 24 lines ...
    STEP: Registering slow webhook via the AdmissionRegistration API
    Sep  5 16:27:53.879: INFO: Waiting for webhook configuration to be ready...
    Sep  5 16:28:03.992: INFO: Waiting for webhook configuration to be ready...
    Sep  5 16:28:14.095: INFO: Waiting for webhook configuration to be ready...
    Sep  5 16:28:24.194: INFO: Waiting for webhook configuration to be ready...
    Sep  5 16:28:34.211: INFO: Waiting for webhook configuration to be ready...
    Sep  5 16:28:34.212: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000242280>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should honor timeout [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 16:28:34.212: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000242280>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2188
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":115,"skipped":2079,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:28:33.298: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on node default medium
    Sep  5 16:28:33.347: INFO: Waiting up to 5m0s for pod "pod-27feb5bc-d27b-4a89-abea-08172642471a" in namespace "emptydir-637" to be "Succeeded or Failed"
    Sep  5 16:28:33.350: INFO: Pod "pod-27feb5bc-d27b-4a89-abea-08172642471a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.359299ms
    Sep  5 16:28:35.354: INFO: Pod "pod-27feb5bc-d27b-4a89-abea-08172642471a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007812178s
    Sep  5 16:28:37.359: INFO: Pod "pod-27feb5bc-d27b-4a89-abea-08172642471a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012518268s
    STEP: Saw pod success
    Sep  5 16:28:37.359: INFO: Pod "pod-27feb5bc-d27b-4a89-abea-08172642471a" satisfied condition "Succeeded or Failed"
    Sep  5 16:28:37.363: INFO: Trying to get logs from node k8s-upgrade-and-conformance-rbkcco-worker-0xh5an pod pod-27feb5bc-d27b-4a89-abea-08172642471a container test-container: <nil>
    STEP: delete the pod
    Sep  5 16:28:37.381: INFO: Waiting for pod pod-27feb5bc-d27b-4a89-abea-08172642471a to disappear
    Sep  5 16:28:37.383: INFO: Pod pod-27feb5bc-d27b-4a89-abea-08172642471a no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:28:37.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-637" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":116,"skipped":2079,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
    
    S
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":81,"skipped":1380,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
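
The "should honor timeout" spec above (and its retry just below) never reaches the actual assertion: the repeated "Waiting for webhook configuration to be ready..." lines suggest the marker requests the test uses to confirm the freshly registered webhook is live keep failing until the readiness polling gives up. For reference, the shape being registered is a v1 ValidatingWebhookConfiguration with a small timeoutSeconds; the sketch below uses placeholder names and a placeholder URL instead of the test's own service and CA bundle:

cat <<'EOF' | kubectl apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook-demo
webhooks:
- name: slow.webhook.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Ignore
  timeoutSeconds: 1                                     # the value "should honor timeout" asserts on
  clientConfig:
    url: https://slow-webhook.example.com/always-allow  # placeholder endpoint
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
EOF
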
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  5 16:28:34.312: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
    STEP: Registering slow webhook via the AdmissionRegistration API
    Sep  5 16:28:48.121: INFO: Waiting for webhook configuration to be ready...
    Sep  5 16:28:58.233: INFO: Waiting for webhook configuration to be ready...
    Sep  5 16:29:08.344: INFO: Waiting for webhook configuration to be ready...
    Sep  5 16:29:18.434: INFO: Waiting for webhook configuration to be ready...
    Sep  5 16:29:28.445: INFO: Waiting for webhook configuration to be ready...
    Sep  5 16:29:28.445: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000242280>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should honor timeout [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  5 16:29:28.445: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000242280>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:29:40.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-watch-5877" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":117,"skipped":2080,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 31 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:29:44.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-5858" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":118,"skipped":2103,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  5 16:29:49.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-4919" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":119,"skipped":2108,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"