Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-05-31 11:54
Elapsed: 2h0m
Revision: master
resultstore: https://source.cloud.google.com/results/invocations/ed1a2a8a-eb85-4c3c-9ac5-b5cd1204f21c/targets/test

No Test Failures!


Error lines from build-log.txt

... skipping 69 lines ...
Analyzing: 4 targets (20 packages loaded, 27 targets configured)
Analyzing: 4 targets (372 packages loaded, 7893 targets configured)
Analyzing: 4 targets (1589 packages loaded, 12410 targets configured)
Analyzing: 4 targets (2268 packages loaded, 15447 targets configured)
Analyzing: 4 targets (2268 packages loaded, 15447 targets configured)
Analyzing: 4 targets (2269 packages loaded, 15447 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/bazel_gazelle/internal/go_repository.bzl:184:13: org_golang_x_tools: gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go:2:43: expected 'package', found 'EOF'
gazelle: found packages escapeinfo (escapeinfo.go) and lib (issue27856.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gccgoimporter/testdata
gazelle: found packages p (issue15920.go) and issue25301 (issue25301.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gcimporter/testdata
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go:1:34: expected 'package', found 'EOF'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go:1:16: expected ';', found '.'
gazelle: finding module path for import domain.name/importdecl: exit status 1: can't load package: package domain.name/importdecl: cannot find module providing package domain.name/importdecl
gazelle: finding module path for import old.com/one: exit status 1: can't load package: package old.com/one: cannot find module providing package old.com/one
gazelle: finding module path for import titanic.biz/bar: exit status 1: can't load package: package titanic.biz/bar: cannot find module providing package titanic.biz/bar
gazelle: finding module path for import titanic.biz/foo: exit status 1: can't load package: package titanic.biz/foo: cannot find module providing package titanic.biz/foo
gazelle: finding module path for import fruit.io/pear: exit status 1: can't load package: package fruit.io/pear: cannot find module providing package fruit.io/pear
gazelle: finding module path for import fruit.io/banana: exit status 1: can't load package: package fruit.io/banana: cannot find module providing package fruit.io/banana
... skipping 157 lines ...
WARNING: Waiting for server process to terminate (waited 10 seconds, waiting at most 60)
WARNING: Waiting for server process to terminate (waited 30 seconds, waiting at most 60)
INFO: Waited 60 seconds for server process (pid=5814) to terminate.
WARNING: Waiting for server process to terminate (waited 5 seconds, waiting at most 10)
WARNING: Waiting for server process to terminate (waited 10 seconds, waiting at most 10)
INFO: Waited 10 seconds for server process (pid=5814) to terminate.
FATAL: Attempted to kill stale server process (pid=5814) using SIGKILL, but it did not die in a timely fashion.
+ true
+ pkill ^bazel
+ true
+ mkdir -p _output/bin/
+ cp bazel-bin/test/e2e/e2e.test _output/bin/
+ find /home/prow/go/src/k8s.io/kubernetes/bazel-bin/ -name kubectl -type f
... skipping 46 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.3
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.3
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.18.0.3
... skipping 4 lines ...
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.3
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 32 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.4
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.4
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: kind-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 32 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.2
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.2
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: kind-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.2
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 37 lines ...
I0531 12:06:28.978139     256 checks.go:376] validating the presence of executable ebtables
I0531 12:06:28.978180     256 checks.go:376] validating the presence of executable ethtool
I0531 12:06:28.978209     256 checks.go:376] validating the presence of executable socat
I0531 12:06:28.978310     256 checks.go:376] validating the presence of executable tc
I0531 12:06:28.978381     256 checks.go:376] validating the presence of executable touch
I0531 12:06:28.978565     256 checks.go:520] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_PIDS: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0531 12:06:29.010409     256 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0531 12:06:29.040684     256 checks.go:618] validating kubelet version
I0531 12:06:29.472265     256 checks.go:128] validating if the "kubelet" service is enabled and active
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
... skipping 93 lines ...
I0531 12:06:46.794269     256 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 33 milliseconds
I0531 12:06:47.291614     256 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 31 milliseconds
I0531 12:06:47.788177     256 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 24 milliseconds
I0531 12:06:48.300080     256 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 40 milliseconds
I0531 12:06:48.802366     256 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 35 milliseconds
I0531 12:06:59.259065     256 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 9999 milliseconds
I0531 12:07:03.515257     256 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 3756 milliseconds
I0531 12:07:03.769102     256 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 9 milliseconds
I0531 12:07:04.261000     256 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds
I0531 12:07:04.760485     256 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds
I0531 12:07:05.262783     256 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 200 OK in 3 milliseconds
[apiclient] All control plane components are healthy after 29.556373 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0531 12:07:05.262980     256 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
I0531 12:07:05.274946     256 round_trippers.go:443] POST https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 10 milliseconds
I0531 12:07:05.280309     256 round_trippers.go:443] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 4 milliseconds
... skipping 109 lines ...
I0531 12:07:26.256416     690 checks.go:376] validating the presence of executable ebtables
I0531 12:07:26.256633     690 checks.go:376] validating the presence of executable ethtool
I0531 12:07:26.256710     690 checks.go:376] validating the presence of executable socat
I0531 12:07:26.256750     690 checks.go:376] validating the presence of executable tc
I0531 12:07:26.256812     690 checks.go:376] validating the presence of executable touch
I0531 12:07:26.256851     690 checks.go:520] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_PIDS: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0531 12:07:26.276122     690 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0531 12:07:26.321157     690 checks.go:618] validating kubelet version
I0531 12:07:26.720216     690 checks.go:128] validating if the "kubelet" service is enabled and active
I0531 12:07:26.759020     690 checks.go:201] validating availability of port 10250
I0531 12:07:26.760118     690 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
I0531 12:07:26.761299     690 checks.go:432] validating if the connectivity type is via proxy or direct
... skipping 69 lines ...
I0531 12:07:26.357036     687 checks.go:376] validating the presence of executable ebtables
I0531 12:07:26.357078     687 checks.go:376] validating the presence of executable ethtool
I0531 12:07:26.357111     687 checks.go:376] validating the presence of executable socat
I0531 12:07:26.357146     687 checks.go:376] validating the presence of executable tc
I0531 12:07:26.357170     687 checks.go:376] validating the presence of executable touch
I0531 12:07:26.357204     687 checks.go:520] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_PIDS: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0531 12:07:26.395862     687 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0531 12:07:26.462531     687 checks.go:618] validating kubelet version
I0531 12:07:26.858526     687 checks.go:128] validating if the "kubelet" service is enabled and active
I0531 12:07:26.907303     687 checks.go:201] validating availability of port 10250
I0531 12:07:26.910747     687 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
I0531 12:07:26.910788     687 checks.go:432] validating if the connectivity type is via proxy or direct
... skipping 71 lines ...
+ GINKGO_PID=11215
+ wait 11215
+ ./hack/ginkgo-e2e.sh --provider=skeleton --num-nodes=2 --ginkgo.focus=\[Conformance\] --ginkgo.skip= --report-dir=/logs/artifacts --disable-log-dump=true
Conformance test: not doing test setup.
I0531 12:08:05.200188   11902 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0531 12:08:05.200478   11902 e2e.go:129] Starting e2e run "e7111260-a50c-47e5-96ec-3b115bebfb7e" on Ginkgo node 1
{"msg":"Test Suite starting","total":292,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1590926883 - Will randomize all specs
Will run 292 of 5101 specs

May 31 12:08:05.289: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 12:08:05.482: INFO: Waiting up to 5m0s for pod "downwardapi-volume-585feae1-a34c-4869-ae08-839d04f490ef" in namespace "downward-api-6835" to be "Succeeded or Failed"
May 31 12:08:05.488: INFO: Pod "downwardapi-volume-585feae1-a34c-4869-ae08-839d04f490ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131729ms
May 31 12:08:07.492: INFO: Pod "downwardapi-volume-585feae1-a34c-4869-ae08-839d04f490ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010496203s
May 31 12:08:09.504: INFO: Pod "downwardapi-volume-585feae1-a34c-4869-ae08-839d04f490ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021886663s
May 31 12:08:11.510: INFO: Pod "downwardapi-volume-585feae1-a34c-4869-ae08-839d04f490ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028103644s
May 31 12:08:13.515: INFO: Pod "downwardapi-volume-585feae1-a34c-4869-ae08-839d04f490ef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.033483257s
May 31 12:08:15.527: INFO: Pod "downwardapi-volume-585feae1-a34c-4869-ae08-839d04f490ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.045375421s
STEP: Saw pod success
May 31 12:08:15.527: INFO: Pod "downwardapi-volume-585feae1-a34c-4869-ae08-839d04f490ef" satisfied condition "Succeeded or Failed"
May 31 12:08:15.536: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-585feae1-a34c-4869-ae08-839d04f490ef container client-container: <nil>
STEP: delete the pod
May 31 12:08:15.572: INFO: Waiting for pod downwardapi-volume-585feae1-a34c-4869-ae08-839d04f490ef to disappear
May 31 12:08:15.575: INFO: Pod downwardapi-volume-585feae1-a34c-4869-ae08-839d04f490ef no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
May 31 12:08:15.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6835" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":292,"completed":1,"skipped":18,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 97 lines ...
May 31 12:08:47.875: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3616/pods","resourceVersion":"909"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
May 31 12:08:47.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3616" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":292,"completed":2,"skipped":71,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 12:08:47.925: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 31 12:08:48.026: INFO: Waiting up to 5m0s for pod "pod-d2686902-07cb-41aa-b8cc-ad3741b240ed" in namespace "emptydir-7425" to be "Succeeded or Failed"
May 31 12:08:48.040: INFO: Pod "pod-d2686902-07cb-41aa-b8cc-ad3741b240ed": Phase="Pending", Reason="", readiness=false. Elapsed: 13.851704ms
May 31 12:08:50.060: INFO: Pod "pod-d2686902-07cb-41aa-b8cc-ad3741b240ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033571216s
May 31 12:08:52.068: INFO: Pod "pod-d2686902-07cb-41aa-b8cc-ad3741b240ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041624538s
STEP: Saw pod success
May 31 12:08:52.068: INFO: Pod "pod-d2686902-07cb-41aa-b8cc-ad3741b240ed" satisfied condition "Succeeded or Failed"
May 31 12:08:52.072: INFO: Trying to get logs from node kind-worker2 pod pod-d2686902-07cb-41aa-b8cc-ad3741b240ed container test-container: <nil>
STEP: delete the pod
May 31 12:08:52.094: INFO: Waiting for pod pod-d2686902-07cb-41aa-b8cc-ad3741b240ed to disappear
May 31 12:08:52.096: INFO: Pod pod-d2686902-07cb-41aa-b8cc-ad3741b240ed no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 12:08:52.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7425" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":3,"skipped":96,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 19 lines ...
May 31 12:09:06.290: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
May 31 12:09:06.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8095" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":292,"completed":4,"skipped":155,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
May 31 12:09:10.959: INFO: Successfully updated pod "labelsupdateee7369aa-982a-4c04-9de7-95bb5247e9e3"
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
May 31 12:09:12.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2834" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":292,"completed":5,"skipped":165,"failed":0}
SSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 31 12:09:13.011: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap that has name configmap-test-emptyKey-260e96ac-f6b7-4d6c-b602-1131b1cb9ed7
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
May 31 12:09:13.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3938" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":292,"completed":6,"skipped":173,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 39 lines ...
May 31 12:09:27.348: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:40063 --kubeconfig=/root/.kube/kind-test-config explain e2e-test-crd-publish-openapi-2580-crds.spec'
May 31 12:09:28.530: INFO: stderr: ""
May 31 12:09:28.530: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2580-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
May 31 12:09:28.530: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:40063 --kubeconfig=/root/.kube/kind-test-config explain e2e-test-crd-publish-openapi-2580-crds.spec.bars'
May 31 12:09:29.690: INFO: stderr: ""
May 31 12:09:29.690: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2580-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
May 31 12:09:29.690: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:40063 --kubeconfig=/root/.kube/kind-test-config explain e2e-test-crd-publish-openapi-2580-crds.spec.bars2'
May 31 12:09:30.684: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 12:09:34.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1150" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":292,"completed":7,"skipped":205,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 12:09:34.278: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 31 12:09:34.346: INFO: Waiting up to 5m0s for pod "pod-12eefde1-41e0-443c-a04c-e82190846b28" in namespace "emptydir-6004" to be "Succeeded or Failed"
May 31 12:09:34.352: INFO: Pod "pod-12eefde1-41e0-443c-a04c-e82190846b28": Phase="Pending", Reason="", readiness=false. Elapsed: 5.986119ms
May 31 12:09:36.363: INFO: Pod "pod-12eefde1-41e0-443c-a04c-e82190846b28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01685489s
May 31 12:09:38.375: INFO: Pod "pod-12eefde1-41e0-443c-a04c-e82190846b28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029538009s
STEP: Saw pod success
May 31 12:09:38.376: INFO: Pod "pod-12eefde1-41e0-443c-a04c-e82190846b28" satisfied condition "Succeeded or Failed"
May 31 12:09:38.386: INFO: Trying to get logs from node kind-worker2 pod pod-12eefde1-41e0-443c-a04c-e82190846b28 container test-container: <nil>
STEP: delete the pod
May 31 12:09:38.433: INFO: Waiting for pod pod-12eefde1-41e0-443c-a04c-e82190846b28 to disappear
May 31 12:09:38.444: INFO: Pod pod-12eefde1-41e0-443c-a04c-e82190846b28 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 12:09:38.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6004" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":8,"skipped":252,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
May 31 12:09:45.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9224" for this suite.
STEP: Destroying namespace "webhook-9224-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":292,"completed":9,"skipped":263,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 9 lines ...
STEP: creating pod
May 31 12:09:49.515: INFO: Pod pod-hostip-a4bbc90d-539b-4425-bdf0-74faddf363a9 has hostIP: 172.18.0.4
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
May 31 12:09:49.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1159" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":292,"completed":10,"skipped":285,"failed":0}

------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 12:09:49.574: INFO: Waiting up to 5m0s for pod "downwardapi-volume-334413e4-aa94-4687-957c-57717ec74d81" in namespace "downward-api-6801" to be "Succeeded or Failed"
May 31 12:09:49.583: INFO: Pod "downwardapi-volume-334413e4-aa94-4687-957c-57717ec74d81": Phase="Pending", Reason="", readiness=false. Elapsed: 8.24724ms
May 31 12:09:51.591: INFO: Pod "downwardapi-volume-334413e4-aa94-4687-957c-57717ec74d81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017062305s
May 31 12:09:53.601: INFO: Pod "downwardapi-volume-334413e4-aa94-4687-957c-57717ec74d81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026301615s
STEP: Saw pod success
May 31 12:09:53.602: INFO: Pod "downwardapi-volume-334413e4-aa94-4687-957c-57717ec74d81" satisfied condition "Succeeded or Failed"
May 31 12:09:53.611: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-334413e4-aa94-4687-957c-57717ec74d81 container client-container: <nil>
STEP: delete the pod
May 31 12:09:53.641: INFO: Waiting for pod downwardapi-volume-334413e4-aa94-4687-957c-57717ec74d81 to disappear
May 31 12:09:53.644: INFO: Pod downwardapi-volume-334413e4-aa94-4687-957c-57717ec74d81 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
May 31 12:09:53.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6801" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":11,"skipped":285,"failed":0}

------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 136 lines ...
May 31 12:10:35.395: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3381/pods","resourceVersion":"1604"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
May 31 12:10:35.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3381" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":292,"completed":12,"skipped":285,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 20 lines ...
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
May 31 12:11:04.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7287" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":292,"completed":13,"skipped":303,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 12:11:04.932: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 31 12:11:04.996: INFO: Waiting up to 5m0s for pod "pod-39183e92-790e-4401-83bf-5f882b67b6f4" in namespace "emptydir-770" to be "Succeeded or Failed"
May 31 12:11:05.000: INFO: Pod "pod-39183e92-790e-4401-83bf-5f882b67b6f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.264641ms
May 31 12:11:07.015: INFO: Pod "pod-39183e92-790e-4401-83bf-5f882b67b6f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018756732s
May 31 12:11:09.024: INFO: Pod "pod-39183e92-790e-4401-83bf-5f882b67b6f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027759402s
STEP: Saw pod success
May 31 12:11:09.024: INFO: Pod "pod-39183e92-790e-4401-83bf-5f882b67b6f4" satisfied condition "Succeeded or Failed"
May 31 12:11:09.032: INFO: Trying to get logs from node kind-worker2 pod pod-39183e92-790e-4401-83bf-5f882b67b6f4 container test-container: <nil>
STEP: delete the pod
May 31 12:11:09.076: INFO: Waiting for pod pod-39183e92-790e-4401-83bf-5f882b67b6f4 to disappear
May 31 12:11:09.087: INFO: Pod pod-39183e92-790e-4401-83bf-5f882b67b6f4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 12:11:09.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-770" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":14,"skipped":357,"failed":0}
SSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
May 31 12:11:13.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5904" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":15,"skipped":360,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 12:11:13.428: INFO: Waiting up to 5m0s for pod "downwardapi-volume-172712e3-b4a2-4299-9706-d4bb8b4d191d" in namespace "downward-api-3862" to be "Succeeded or Failed"
May 31 12:11:13.431: INFO: Pod "downwardapi-volume-172712e3-b4a2-4299-9706-d4bb8b4d191d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.213322ms
May 31 12:11:15.440: INFO: Pod "downwardapi-volume-172712e3-b4a2-4299-9706-d4bb8b4d191d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012788882s
May 31 12:11:17.446: INFO: Pod "downwardapi-volume-172712e3-b4a2-4299-9706-d4bb8b4d191d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018207225s
STEP: Saw pod success
May 31 12:11:17.446: INFO: Pod "downwardapi-volume-172712e3-b4a2-4299-9706-d4bb8b4d191d" satisfied condition "Succeeded or Failed"
May 31 12:11:17.450: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-172712e3-b4a2-4299-9706-d4bb8b4d191d container client-container: <nil>
STEP: delete the pod
May 31 12:11:17.468: INFO: Waiting for pod downwardapi-volume-172712e3-b4a2-4299-9706-d4bb8b4d191d to disappear
May 31 12:11:17.471: INFO: Pod downwardapi-volume-172712e3-b4a2-4299-9706-d4bb8b4d191d no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
May 31 12:11:17.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3862" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":16,"skipped":362,"failed":0}
S
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
... skipping 21 lines ...
May 31 12:11:26.607: INFO: Pod "adopt-release-6jrbf": Phase="Running", Reason="", readiness=true. Elapsed: 2.015733526s
May 31 12:11:26.607: INFO: Pod "adopt-release-6jrbf" satisfied condition "released"
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
May 31 12:11:26.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8682" for this suite.
•{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":292,"completed":17,"skipped":363,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-secret-p9gd
STEP: Creating a pod to test atomic-volume-subpath
May 31 12:11:26.738: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-p9gd" in namespace "subpath-1383" to be "Succeeded or Failed"
May 31 12:11:26.746: INFO: Pod "pod-subpath-test-secret-p9gd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.311146ms
May 31 12:11:28.768: INFO: Pod "pod-subpath-test-secret-p9gd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029952986s
May 31 12:11:30.778: INFO: Pod "pod-subpath-test-secret-p9gd": Phase="Running", Reason="", readiness=true. Elapsed: 4.040268999s
May 31 12:11:32.784: INFO: Pod "pod-subpath-test-secret-p9gd": Phase="Running", Reason="", readiness=true. Elapsed: 6.045983933s
May 31 12:11:34.792: INFO: Pod "pod-subpath-test-secret-p9gd": Phase="Running", Reason="", readiness=true. Elapsed: 8.05360679s
May 31 12:11:36.800: INFO: Pod "pod-subpath-test-secret-p9gd": Phase="Running", Reason="", readiness=true. Elapsed: 10.062008494s
... skipping 2 lines ...
May 31 12:11:42.834: INFO: Pod "pod-subpath-test-secret-p9gd": Phase="Running", Reason="", readiness=true. Elapsed: 16.095722242s
May 31 12:11:44.838: INFO: Pod "pod-subpath-test-secret-p9gd": Phase="Running", Reason="", readiness=true. Elapsed: 18.100282084s
May 31 12:11:46.842: INFO: Pod "pod-subpath-test-secret-p9gd": Phase="Running", Reason="", readiness=true. Elapsed: 20.104157001s
May 31 12:11:48.856: INFO: Pod "pod-subpath-test-secret-p9gd": Phase="Running", Reason="", readiness=true. Elapsed: 22.117720738s
May 31 12:11:50.863: INFO: Pod "pod-subpath-test-secret-p9gd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.12497343s
STEP: Saw pod success
May 31 12:11:50.863: INFO: Pod "pod-subpath-test-secret-p9gd" satisfied condition "Succeeded or Failed"
May 31 12:11:50.873: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-secret-p9gd container test-container-subpath-secret-p9gd: <nil>
STEP: delete the pod
May 31 12:11:50.947: INFO: Waiting for pod pod-subpath-test-secret-p9gd to disappear
May 31 12:11:50.958: INFO: Pod pod-subpath-test-secret-p9gd no longer exists
STEP: Deleting pod pod-subpath-test-secret-p9gd
May 31 12:11:50.958: INFO: Deleting pod "pod-subpath-test-secret-p9gd" in namespace "subpath-1383"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
May 31 12:11:50.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1383" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":292,"completed":18,"skipped":376,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 19 lines ...
May 31 12:12:05.171: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 31 12:12:05.176: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
May 31 12:12:05.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1866" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":292,"completed":19,"skipped":395,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 12:12:05.191: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
May 31 12:12:05.236: INFO: Waiting up to 5m0s for pod "pod-71ed84c4-28c4-42f2-8951-023d718d6de8" in namespace "emptydir-1368" to be "Succeeded or Failed"
May 31 12:12:05.240: INFO: Pod "pod-71ed84c4-28c4-42f2-8951-023d718d6de8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.381061ms
May 31 12:12:07.249: INFO: Pod "pod-71ed84c4-28c4-42f2-8951-023d718d6de8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012484612s
May 31 12:12:09.256: INFO: Pod "pod-71ed84c4-28c4-42f2-8951-023d718d6de8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020022709s
STEP: Saw pod success
May 31 12:12:09.256: INFO: Pod "pod-71ed84c4-28c4-42f2-8951-023d718d6de8" satisfied condition "Succeeded or Failed"
May 31 12:12:09.264: INFO: Trying to get logs from node kind-worker2 pod pod-71ed84c4-28c4-42f2-8951-023d718d6de8 container test-container: <nil>
STEP: delete the pod
May 31 12:12:09.302: INFO: Waiting for pod pod-71ed84c4-28c4-42f2-8951-023d718d6de8 to disappear
May 31 12:12:09.309: INFO: Pod pod-71ed84c4-28c4-42f2-8951-023d718d6de8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 12:12:09.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1368" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":20,"skipped":409,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
May 31 12:12:09.790: INFO: stderr: ""
May 31 12:12:09.790: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 12:12:09.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9758" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":292,"completed":21,"skipped":410,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
... skipping 42 lines ...
May 31 12:12:22.470: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 31 12:12:22.771: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  test/e2e/framework/framework.go:175
May 31 12:12:22.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-6431" for this suite.
•{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":22,"skipped":446,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
May 31 12:12:25.927: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
May 31 12:12:25.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2387" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":292,"completed":23,"skipped":489,"failed":0}
S
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 44 lines ...
May 31 12:13:06.400: INFO: Deleting pod "simpletest.rc-l26cc" in namespace "gc-8469"
May 31 12:13:06.466: INFO: Deleting pod "simpletest.rc-pgtvb" in namespace "gc-8469"
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
May 31 12:13:06.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8469" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":292,"completed":24,"skipped":490,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 7 lines ...
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
May 31 12:13:11.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7906" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":292,"completed":25,"skipped":507,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 12:13:11.480: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
May 31 12:13:11.555: INFO: Waiting up to 5m0s for pod "pod-172183ed-4252-46bc-9ca7-8a7d95831191" in namespace "emptydir-1981" to be "Succeeded or Failed"
May 31 12:13:11.564: INFO: Pod "pod-172183ed-4252-46bc-9ca7-8a7d95831191": Phase="Pending", Reason="", readiness=false. Elapsed: 8.102655ms
May 31 12:13:13.576: INFO: Pod "pod-172183ed-4252-46bc-9ca7-8a7d95831191": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019952919s
May 31 12:13:15.595: INFO: Pod "pod-172183ed-4252-46bc-9ca7-8a7d95831191": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038792079s
STEP: Saw pod success
May 31 12:13:15.595: INFO: Pod "pod-172183ed-4252-46bc-9ca7-8a7d95831191" satisfied condition "Succeeded or Failed"
May 31 12:13:15.599: INFO: Trying to get logs from node kind-worker2 pod pod-172183ed-4252-46bc-9ca7-8a7d95831191 container test-container: <nil>
STEP: delete the pod
May 31 12:13:15.640: INFO: Waiting for pod pod-172183ed-4252-46bc-9ca7-8a7d95831191 to disappear
May 31 12:13:15.648: INFO: Pod pod-172183ed-4252-46bc-9ca7-8a7d95831191 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 12:13:15.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1981" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":26,"skipped":516,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-386a7e36-e3f7-4dda-8192-4a7f66bad7cc
STEP: Creating a pod to test consume configMaps
May 31 12:13:15.751: INFO: Waiting up to 5m0s for pod "pod-configmaps-e115aeb3-2e8e-4df7-8d28-c9221f920aa6" in namespace "configmap-6077" to be "Succeeded or Failed"
May 31 12:13:15.756: INFO: Pod "pod-configmaps-e115aeb3-2e8e-4df7-8d28-c9221f920aa6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431979ms
May 31 12:13:17.768: INFO: Pod "pod-configmaps-e115aeb3-2e8e-4df7-8d28-c9221f920aa6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016643129s
May 31 12:13:19.780: INFO: Pod "pod-configmaps-e115aeb3-2e8e-4df7-8d28-c9221f920aa6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028587771s
STEP: Saw pod success
May 31 12:13:19.780: INFO: Pod "pod-configmaps-e115aeb3-2e8e-4df7-8d28-c9221f920aa6" satisfied condition "Succeeded or Failed"
May 31 12:13:19.792: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-e115aeb3-2e8e-4df7-8d28-c9221f920aa6 container configmap-volume-test: <nil>
STEP: delete the pod
May 31 12:13:19.855: INFO: Waiting for pod pod-configmaps-e115aeb3-2e8e-4df7-8d28-c9221f920aa6 to disappear
May 31 12:13:19.862: INFO: Pod pod-configmaps-e115aeb3-2e8e-4df7-8d28-c9221f920aa6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
May 31 12:13:19.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6077" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":27,"skipped":528,"failed":0}

------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 17 lines ...
May 31 12:13:20.029: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-968 /api/v1/namespaces/watch-968/configmaps/e2e-watch-test-watch-closed 38ea63d6-a134-4d43-b63a-f09e1e800c48 2907 0 2020-05-31 12:13:19 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-31 12:13:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 31 12:13:20.030: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-968 /api/v1/namespaces/watch-968/configmaps/e2e-watch-test-watch-closed 38ea63d6-a134-4d43-b63a-f09e1e800c48 2908 0 2020-05-31 12:13:19 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-31 12:13:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
May 31 12:13:20.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-968" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":292,"completed":28,"skipped":528,"failed":0}

------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
May 31 12:13:48.252: INFO: Restart count of pod container-probe-1686/liveness-8e75013c-f51e-4f34-9387-1169452152c8 is now 1 (24.101742413s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
May 31 12:13:48.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1686" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":292,"completed":29,"skipped":528,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should be possible to delete [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
May 31 12:13:48.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7288" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":292,"completed":30,"skipped":574,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-32e83bcc-8665-4a40-940b-d7643ef93762
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
May 31 12:13:57.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6042" for this suite.
•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":31,"skipped":611,"failed":0}
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-projected-2hnt
STEP: Creating a pod to test atomic-volume-subpath
May 31 12:13:57.216: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-2hnt" in namespace "subpath-1540" to be "Succeeded or Failed"
May 31 12:13:57.223: INFO: Pod "pod-subpath-test-projected-2hnt": Phase="Pending", Reason="", readiness=false. Elapsed: 7.484977ms
May 31 12:13:59.238: INFO: Pod "pod-subpath-test-projected-2hnt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022605066s
May 31 12:14:01.244: INFO: Pod "pod-subpath-test-projected-2hnt": Phase="Running", Reason="", readiness=true. Elapsed: 4.028659941s
May 31 12:14:03.259: INFO: Pod "pod-subpath-test-projected-2hnt": Phase="Running", Reason="", readiness=true. Elapsed: 6.042723259s
May 31 12:14:05.270: INFO: Pod "pod-subpath-test-projected-2hnt": Phase="Running", Reason="", readiness=true. Elapsed: 8.053834685s
May 31 12:14:07.278: INFO: Pod "pod-subpath-test-projected-2hnt": Phase="Running", Reason="", readiness=true. Elapsed: 10.06250742s
... skipping 2 lines ...
May 31 12:14:13.306: INFO: Pod "pod-subpath-test-projected-2hnt": Phase="Running", Reason="", readiness=true. Elapsed: 16.089738193s
May 31 12:14:15.315: INFO: Pod "pod-subpath-test-projected-2hnt": Phase="Running", Reason="", readiness=true. Elapsed: 18.098783015s
May 31 12:14:17.327: INFO: Pod "pod-subpath-test-projected-2hnt": Phase="Running", Reason="", readiness=true. Elapsed: 20.111627585s
May 31 12:14:19.333: INFO: Pod "pod-subpath-test-projected-2hnt": Phase="Running", Reason="", readiness=true. Elapsed: 22.116781031s
May 31 12:14:21.350: INFO: Pod "pod-subpath-test-projected-2hnt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.134221878s
STEP: Saw pod success
May 31 12:14:21.350: INFO: Pod "pod-subpath-test-projected-2hnt" satisfied condition "Succeeded or Failed"
May 31 12:14:21.359: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-projected-2hnt container test-container-subpath-projected-2hnt: <nil>
STEP: delete the pod
May 31 12:14:21.451: INFO: Waiting for pod pod-subpath-test-projected-2hnt to disappear
May 31 12:14:21.463: INFO: Pod pod-subpath-test-projected-2hnt no longer exists
STEP: Deleting pod pod-subpath-test-projected-2hnt
May 31 12:14:21.464: INFO: Deleting pod "pod-subpath-test-projected-2hnt" in namespace "subpath-1540"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
May 31 12:14:21.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1540" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":292,"completed":32,"skipped":614,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
May 31 12:14:35.783: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 31 12:14:39.446: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 12:14:53.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-971" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":292,"completed":33,"skipped":620,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
May 31 12:15:02.480: INFO: stderr: ""
May 31 12:15:02.480: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7561-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<map[string]>\n     Specification of Waldo\n\n   status\t<Object>\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 12:15:06.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1882" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":292,"completed":34,"skipped":666,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-cbc4e7fa-387f-4fb5-8526-49ba3bc66625
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
May 31 12:16:22.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5097" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":35,"skipped":681,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 15 lines ...
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
May 31 12:16:34.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9502" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":292,"completed":36,"skipped":690,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
May 31 12:16:37.770: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
May 31 12:16:37.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4403" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":292,"completed":37,"skipped":710,"failed":0}

------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
May 31 12:16:37.938: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
May 31 12:16:42.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2975" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":292,"completed":38,"skipped":710,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-4zzk
STEP: Creating a pod to test atomic-volume-subpath
May 31 12:16:43.087: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4zzk" in namespace "subpath-6316" to be "Succeeded or Failed"
May 31 12:16:43.099: INFO: Pod "pod-subpath-test-configmap-4zzk": Phase="Pending", Reason="", readiness=false. Elapsed: 11.156235ms
May 31 12:16:45.106: INFO: Pod "pod-subpath-test-configmap-4zzk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018368519s
May 31 12:16:47.111: INFO: Pod "pod-subpath-test-configmap-4zzk": Phase="Running", Reason="", readiness=true. Elapsed: 4.023458216s
May 31 12:16:49.125: INFO: Pod "pod-subpath-test-configmap-4zzk": Phase="Running", Reason="", readiness=true. Elapsed: 6.037014479s
May 31 12:16:51.135: INFO: Pod "pod-subpath-test-configmap-4zzk": Phase="Running", Reason="", readiness=true. Elapsed: 8.046936629s
May 31 12:16:53.142: INFO: Pod "pod-subpath-test-configmap-4zzk": Phase="Running", Reason="", readiness=true. Elapsed: 10.054274814s
... skipping 2 lines ...
May 31 12:16:59.161: INFO: Pod "pod-subpath-test-configmap-4zzk": Phase="Running", Reason="", readiness=true. Elapsed: 16.073112028s
May 31 12:17:01.164: INFO: Pod "pod-subpath-test-configmap-4zzk": Phase="Running", Reason="", readiness=true. Elapsed: 18.076652794s
May 31 12:17:03.172: INFO: Pod "pod-subpath-test-configmap-4zzk": Phase="Running", Reason="", readiness=true. Elapsed: 20.084673738s
May 31 12:17:05.178: INFO: Pod "pod-subpath-test-configmap-4zzk": Phase="Running", Reason="", readiness=true. Elapsed: 22.090670072s
May 31 12:17:07.188: INFO: Pod "pod-subpath-test-configmap-4zzk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.10073643s
STEP: Saw pod success
May 31 12:17:07.188: INFO: Pod "pod-subpath-test-configmap-4zzk" satisfied condition "Succeeded or Failed"
May 31 12:17:07.195: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-configmap-4zzk container test-container-subpath-configmap-4zzk: <nil>
STEP: delete the pod
May 31 12:17:07.232: INFO: Waiting for pod pod-subpath-test-configmap-4zzk to disappear
May 31 12:17:07.240: INFO: Pod pod-subpath-test-configmap-4zzk no longer exists
STEP: Deleting pod pod-subpath-test-configmap-4zzk
May 31 12:17:07.240: INFO: Deleting pod "pod-subpath-test-configmap-4zzk" in namespace "subpath-6316"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
May 31 12:17:07.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6316" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":292,"completed":39,"skipped":718,"failed":0}
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-06c1592f-dad2-4404-bb09-cff9090080f5
STEP: Creating a pod to test consume secrets
May 31 12:17:07.347: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-086c4b3c-521c-485b-899c-6d7346bcfc75" in namespace "projected-4408" to be "Succeeded or Failed"
May 31 12:17:07.360: INFO: Pod "pod-projected-secrets-086c4b3c-521c-485b-899c-6d7346bcfc75": Phase="Pending", Reason="", readiness=false. Elapsed: 13.839538ms
May 31 12:17:09.375: INFO: Pod "pod-projected-secrets-086c4b3c-521c-485b-899c-6d7346bcfc75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028258148s
May 31 12:17:11.387: INFO: Pod "pod-projected-secrets-086c4b3c-521c-485b-899c-6d7346bcfc75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040257837s
STEP: Saw pod success
May 31 12:17:11.387: INFO: Pod "pod-projected-secrets-086c4b3c-521c-485b-899c-6d7346bcfc75" satisfied condition "Succeeded or Failed"
May 31 12:17:11.396: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-086c4b3c-521c-485b-899c-6d7346bcfc75 container projected-secret-volume-test: <nil>
STEP: delete the pod
May 31 12:17:11.450: INFO: Waiting for pod pod-projected-secrets-086c4b3c-521c-485b-899c-6d7346bcfc75 to disappear
May 31 12:17:11.463: INFO: Pod pod-projected-secrets-086c4b3c-521c-485b-899c-6d7346bcfc75 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
May 31 12:17:11.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4408" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":40,"skipped":720,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
May 31 12:17:11.487: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May 31 12:17:11.594: INFO: Waiting up to 5m0s for pod "downward-api-7d93054c-9cf3-49c6-b8e0-8c4360356984" in namespace "downward-api-215" to be "Succeeded or Failed"
May 31 12:17:11.603: INFO: Pod "downward-api-7d93054c-9cf3-49c6-b8e0-8c4360356984": Phase="Pending", Reason="", readiness=false. Elapsed: 8.860904ms
May 31 12:17:13.614: INFO: Pod "downward-api-7d93054c-9cf3-49c6-b8e0-8c4360356984": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019669176s
May 31 12:17:15.623: INFO: Pod "downward-api-7d93054c-9cf3-49c6-b8e0-8c4360356984": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02861101s
STEP: Saw pod success
May 31 12:17:15.623: INFO: Pod "downward-api-7d93054c-9cf3-49c6-b8e0-8c4360356984" satisfied condition "Succeeded or Failed"
May 31 12:17:15.636: INFO: Trying to get logs from node kind-worker2 pod downward-api-7d93054c-9cf3-49c6-b8e0-8c4360356984 container dapi-container: <nil>
STEP: delete the pod
May 31 12:17:15.688: INFO: Waiting for pod downward-api-7d93054c-9cf3-49c6-b8e0-8c4360356984 to disappear
May 31 12:17:15.704: INFO: Pod downward-api-7d93054c-9cf3-49c6-b8e0-8c4360356984 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
May 31 12:17:15.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-215" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":292,"completed":41,"skipped":731,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
May 31 12:17:21.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5342" for this suite.
STEP: Destroying namespace "webhook-5342-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":292,"completed":42,"skipped":734,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
May 31 12:17:27.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5854" for this suite.
STEP: Destroying namespace "webhook-5854-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":292,"completed":43,"skipped":742,"failed":0}

------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 36 lines ...
May 31 12:17:35.564: INFO: stdout: "service/rm3 exposed\n"
May 31 12:17:35.568: INFO: Service rm3 in namespace kubectl-6074 found.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 12:17:37.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6074" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":292,"completed":44,"skipped":742,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 12:17:37.644: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4ce7de3b-f010-4ef8-80b9-18f7504385b4" in namespace "downward-api-1018" to be "Succeeded or Failed"
May 31 12:17:37.652: INFO: Pod "downwardapi-volume-4ce7de3b-f010-4ef8-80b9-18f7504385b4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.656145ms
May 31 12:17:39.666: INFO: Pod "downwardapi-volume-4ce7de3b-f010-4ef8-80b9-18f7504385b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022590548s
May 31 12:17:41.677: INFO: Pod "downwardapi-volume-4ce7de3b-f010-4ef8-80b9-18f7504385b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032979366s
STEP: Saw pod success
May 31 12:17:41.678: INFO: Pod "downwardapi-volume-4ce7de3b-f010-4ef8-80b9-18f7504385b4" satisfied condition "Succeeded or Failed"
May 31 12:17:41.681: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-4ce7de3b-f010-4ef8-80b9-18f7504385b4 container client-container: <nil>
STEP: delete the pod
May 31 12:17:41.698: INFO: Waiting for pod downwardapi-volume-4ce7de3b-f010-4ef8-80b9-18f7504385b4 to disappear
May 31 12:17:41.702: INFO: Pod downwardapi-volume-4ce7de3b-f010-4ef8-80b9-18f7504385b4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
May 31 12:17:41.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1018" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":292,"completed":45,"skipped":746,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
May 31 12:17:47.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1782" for this suite.
STEP: Destroying namespace "webhook-1782-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":292,"completed":46,"skipped":766,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 23 lines ...
May 31 12:18:06.063: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 31 12:18:06.069: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
May 31 12:18:06.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1826" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":292,"completed":47,"skipped":777,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-44ee076b-2cf1-4271-a630-cc456df06281
STEP: Creating a pod to test consume configMaps
May 31 12:18:06.139: INFO: Waiting up to 5m0s for pod "pod-configmaps-c16c5822-b88f-4ecc-b60b-d7dc5ec90f1a" in namespace "configmap-360" to be "Succeeded or Failed"
May 31 12:18:06.142: INFO: Pod "pod-configmaps-c16c5822-b88f-4ecc-b60b-d7dc5ec90f1a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.627276ms
May 31 12:18:08.150: INFO: Pod "pod-configmaps-c16c5822-b88f-4ecc-b60b-d7dc5ec90f1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011495305s
May 31 12:18:10.158: INFO: Pod "pod-configmaps-c16c5822-b88f-4ecc-b60b-d7dc5ec90f1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019231921s
STEP: Saw pod success
May 31 12:18:10.161: INFO: Pod "pod-configmaps-c16c5822-b88f-4ecc-b60b-d7dc5ec90f1a" satisfied condition "Succeeded or Failed"
May 31 12:18:10.174: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-c16c5822-b88f-4ecc-b60b-d7dc5ec90f1a container configmap-volume-test: <nil>
STEP: delete the pod
May 31 12:18:10.216: INFO: Waiting for pod pod-configmaps-c16c5822-b88f-4ecc-b60b-d7dc5ec90f1a to disappear
May 31 12:18:10.227: INFO: Pod pod-configmaps-c16c5822-b88f-4ecc-b60b-d7dc5ec90f1a no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
May 31 12:18:10.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-360" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":292,"completed":48,"skipped":825,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-6ec31151-fd49-4afb-aaaf-733658fa180f
STEP: Creating a pod to test consume configMaps
May 31 12:18:10.340: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2a56f8a7-ca4c-417a-b3b9-2db8993a2d23" in namespace "projected-1526" to be "Succeeded or Failed"
May 31 12:18:10.347: INFO: Pod "pod-projected-configmaps-2a56f8a7-ca4c-417a-b3b9-2db8993a2d23": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206035ms
May 31 12:18:12.362: INFO: Pod "pod-projected-configmaps-2a56f8a7-ca4c-417a-b3b9-2db8993a2d23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021725612s
May 31 12:18:14.368: INFO: Pod "pod-projected-configmaps-2a56f8a7-ca4c-417a-b3b9-2db8993a2d23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027534101s
STEP: Saw pod success
May 31 12:18:14.368: INFO: Pod "pod-projected-configmaps-2a56f8a7-ca4c-417a-b3b9-2db8993a2d23" satisfied condition "Succeeded or Failed"
May 31 12:18:14.376: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-2a56f8a7-ca4c-417a-b3b9-2db8993a2d23 container projected-configmap-volume-test: <nil>
STEP: delete the pod
May 31 12:18:14.416: INFO: Waiting for pod pod-projected-configmaps-2a56f8a7-ca4c-417a-b3b9-2db8993a2d23 to disappear
May 31 12:18:14.428: INFO: Pod pod-projected-configmaps-2a56f8a7-ca4c-417a-b3b9-2db8993a2d23 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
May 31 12:18:14.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1526" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":49,"skipped":861,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl server-side dry-run 
  should check if kubectl can dry-run update Pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 22 lines ...
May 31 12:18:25.334: INFO: stderr: ""
May 31 12:18:25.334: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 12:18:25.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4162" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":292,"completed":50,"skipped":864,"failed":0}

------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 17 lines ...
May 31 12:18:29.780: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 31 12:18:30.095: INFO: Deleting pod dns-771...
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
May 31 12:18:30.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-771" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":292,"completed":51,"skipped":864,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
May 31 12:18:34.280: INFO: Initial restart count of pod test-webserver-f33ab94b-a322-4ba0-84c7-2accbb4380d1 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
May 31 12:22:35.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8940" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":292,"completed":52,"skipped":888,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 31 12:22:39.954: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d410cead-cf65-4d68-ac46-a06c1155f563"
May 31 12:22:39.954: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d410cead-cf65-4d68-ac46-a06c1155f563" in namespace "pods-9333" to be "terminated due to deadline exceeded"
May 31 12:22:39.960: INFO: Pod "pod-update-activedeadlineseconds-d410cead-cf65-4d68-ac46-a06c1155f563": Phase="Running", Reason="", readiness=true. Elapsed: 6.351417ms
May 31 12:22:41.970: INFO: Pod "pod-update-activedeadlineseconds-d410cead-cf65-4d68-ac46-a06c1155f563": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.016008959s
May 31 12:22:41.970: INFO: Pod "pod-update-activedeadlineseconds-d410cead-cf65-4d68-ac46-a06c1155f563" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
May 31 12:22:41.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9333" for this suite.
•{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":292,"completed":53,"skipped":904,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
May 31 12:22:41.985: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May 31 12:22:42.030: INFO: Waiting up to 5m0s for pod "downward-api-34eefb1e-6d42-4eff-8277-391fe45e3f58" in namespace "downward-api-7393" to be "Succeeded or Failed"
May 31 12:22:42.032: INFO: Pod "downward-api-34eefb1e-6d42-4eff-8277-391fe45e3f58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114665ms
May 31 12:22:44.052: INFO: Pod "downward-api-34eefb1e-6d42-4eff-8277-391fe45e3f58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02225358s
May 31 12:22:46.063: INFO: Pod "downward-api-34eefb1e-6d42-4eff-8277-391fe45e3f58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032665831s
STEP: Saw pod success
May 31 12:22:46.063: INFO: Pod "downward-api-34eefb1e-6d42-4eff-8277-391fe45e3f58" satisfied condition "Succeeded or Failed"
May 31 12:22:46.069: INFO: Trying to get logs from node kind-worker2 pod downward-api-34eefb1e-6d42-4eff-8277-391fe45e3f58 container dapi-container: <nil>
STEP: delete the pod
May 31 12:22:46.140: INFO: Waiting for pod downward-api-34eefb1e-6d42-4eff-8277-391fe45e3f58 to disappear
May 31 12:22:46.151: INFO: Pod downward-api-34eefb1e-6d42-4eff-8277-391fe45e3f58 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
May 31 12:22:46.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7393" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":292,"completed":54,"skipped":943,"failed":0}
SSSSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 71 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 12:23:15.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7952" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":292,"completed":55,"skipped":948,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 15 lines ...
  test/e2e/framework/framework.go:175
May 31 12:23:21.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-962" for this suite.
STEP: Destroying namespace "nsdeletetest-4564" for this suite.
May 31 12:23:21.694: INFO: Namespace nsdeletetest-4564 was already deleted
STEP: Destroying namespace "nsdeletetest-4764" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":292,"completed":56,"skipped":958,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
May 31 12:23:21.708: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override all
May 31 12:23:21.756: INFO: Waiting up to 5m0s for pod "client-containers-1e139f82-e428-4d11-9b47-f8d9100dd48e" in namespace "containers-5462" to be "Succeeded or Failed"
May 31 12:23:21.762: INFO: Pod "client-containers-1e139f82-e428-4d11-9b47-f8d9100dd48e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.593887ms
May 31 12:23:23.775: INFO: Pod "client-containers-1e139f82-e428-4d11-9b47-f8d9100dd48e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01851757s
May 31 12:23:25.779: INFO: Pod "client-containers-1e139f82-e428-4d11-9b47-f8d9100dd48e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022387688s
STEP: Saw pod success
May 31 12:23:25.779: INFO: Pod "client-containers-1e139f82-e428-4d11-9b47-f8d9100dd48e" satisfied condition "Succeeded or Failed"
May 31 12:23:25.784: INFO: Trying to get logs from node kind-worker2 pod client-containers-1e139f82-e428-4d11-9b47-f8d9100dd48e container test-container: <nil>
STEP: delete the pod
May 31 12:23:25.823: INFO: Waiting for pod client-containers-1e139f82-e428-4d11-9b47-f8d9100dd48e to disappear
May 31 12:23:25.829: INFO: Pod client-containers-1e139f82-e428-4d11-9b47-f8d9100dd48e no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
May 31 12:23:25.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5462" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":292,"completed":57,"skipped":965,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 12:23:25.846: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 31 12:23:25.897: INFO: Waiting up to 5m0s for pod "pod-1d8c12d4-68c9-4e5c-b060-3b852577f7e0" in namespace "emptydir-45" to be "Succeeded or Failed"
May 31 12:23:25.900: INFO: Pod "pod-1d8c12d4-68c9-4e5c-b060-3b852577f7e0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.275826ms
May 31 12:23:27.924: INFO: Pod "pod-1d8c12d4-68c9-4e5c-b060-3b852577f7e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027471236s
May 31 12:23:29.930: INFO: Pod "pod-1d8c12d4-68c9-4e5c-b060-3b852577f7e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033724224s
STEP: Saw pod success
May 31 12:23:29.930: INFO: Pod "pod-1d8c12d4-68c9-4e5c-b060-3b852577f7e0" satisfied condition "Succeeded or Failed"
May 31 12:23:29.938: INFO: Trying to get logs from node kind-worker2 pod pod-1d8c12d4-68c9-4e5c-b060-3b852577f7e0 container test-container: <nil>
STEP: delete the pod
May 31 12:23:29.983: INFO: Waiting for pod pod-1d8c12d4-68c9-4e5c-b060-3b852577f7e0 to disappear
May 31 12:23:29.991: INFO: Pod pod-1d8c12d4-68c9-4e5c-b060-3b852577f7e0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 12:23:29.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-45" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":58,"skipped":968,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 31 12:23:30.007: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod
May 31 12:23:30.053: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
May 31 12:23:38.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7136" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":292,"completed":59,"skipped":987,"failed":0}

------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 24 lines ...
May 31 12:23:39.503: INFO: created pod pod-service-account-nomountsa-nomountspec
May 31 12:23:39.503: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
May 31 12:23:39.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6672" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":292,"completed":60,"skipped":987,"failed":0}
SS
------------------------------
[k8s.io] Variable Expansion 
  should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 35 lines ...
May 31 12:25:59.565: INFO: Deleting pod "var-expansion-4721386b-f3dd-4d32-87ed-f477fd646bc7" in namespace "var-expansion-9321"
May 31 12:25:59.573: INFO: Wait up to 5m0s for pod "var-expansion-4721386b-f3dd-4d32-87ed-f477fd646bc7" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
May 31 12:26:35.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9321" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":292,"completed":61,"skipped":989,"failed":0}
SSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 10 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 12:26:35.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6015" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":292,"completed":62,"skipped":994,"failed":0}

------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 12:26:35.695: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5bbad431-9e6d-46a8-bec6-ef4a0f0d00ad" in namespace "projected-8059" to be "Succeeded or Failed"
May 31 12:26:35.699: INFO: Pod "downwardapi-volume-5bbad431-9e6d-46a8-bec6-ef4a0f0d00ad": Phase="Pending", Reason="", readiness=false. Elapsed: 3.768188ms
May 31 12:26:37.711: INFO: Pod "downwardapi-volume-5bbad431-9e6d-46a8-bec6-ef4a0f0d00ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016261677s
May 31 12:26:39.722: INFO: Pod "downwardapi-volume-5bbad431-9e6d-46a8-bec6-ef4a0f0d00ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02660884s
STEP: Saw pod success
May 31 12:26:39.722: INFO: Pod "downwardapi-volume-5bbad431-9e6d-46a8-bec6-ef4a0f0d00ad" satisfied condition "Succeeded or Failed"
May 31 12:26:39.731: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-5bbad431-9e6d-46a8-bec6-ef4a0f0d00ad container client-container: <nil>
STEP: delete the pod
May 31 12:26:39.775: INFO: Waiting for pod downwardapi-volume-5bbad431-9e6d-46a8-bec6-ef4a0f0d00ad to disappear
May 31 12:26:39.783: INFO: Pod downwardapi-volume-5bbad431-9e6d-46a8-bec6-ef4a0f0d00ad no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
May 31 12:26:39.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8059" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":63,"skipped":994,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 3 lines ...
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  test/e2e/common/pods.go:179
[It] should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
May 31 12:26:43.904: INFO: Waiting up to 5m0s for pod "client-envvars-e7eb842d-28e6-4876-b326-9c0ec57e17f2" in namespace "pods-4788" to be "Succeeded or Failed"
May 31 12:26:43.916: INFO: Pod "client-envvars-e7eb842d-28e6-4876-b326-9c0ec57e17f2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.4589ms
May 31 12:26:45.925: INFO: Pod "client-envvars-e7eb842d-28e6-4876-b326-9c0ec57e17f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020511879s
May 31 12:26:47.947: INFO: Pod "client-envvars-e7eb842d-28e6-4876-b326-9c0ec57e17f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042484905s
STEP: Saw pod success
May 31 12:26:47.947: INFO: Pod "client-envvars-e7eb842d-28e6-4876-b326-9c0ec57e17f2" satisfied condition "Succeeded or Failed"
May 31 12:26:47.956: INFO: Trying to get logs from node kind-worker2 pod client-envvars-e7eb842d-28e6-4876-b326-9c0ec57e17f2 container env3cont: <nil>
STEP: delete the pod
May 31 12:26:48.026: INFO: Waiting for pod client-envvars-e7eb842d-28e6-4876-b326-9c0ec57e17f2 to disappear
May 31 12:26:48.039: INFO: Pod client-envvars-e7eb842d-28e6-4876-b326-9c0ec57e17f2 no longer exists
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
May 31 12:26:48.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4788" for this suite.
•{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":292,"completed":64,"skipped":1001,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-a072a594-fa11-459c-831b-9ef9665fa4cb
STEP: Creating a pod to test consume secrets
May 31 12:26:48.183: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b54883e6-3802-42eb-ac65-973e5aaa40e9" in namespace "projected-2859" to be "Succeeded or Failed"
May 31 12:26:48.194: INFO: Pod "pod-projected-secrets-b54883e6-3802-42eb-ac65-973e5aaa40e9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.526944ms
May 31 12:26:50.215: INFO: Pod "pod-projected-secrets-b54883e6-3802-42eb-ac65-973e5aaa40e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031489006s
May 31 12:26:52.220: INFO: Pod "pod-projected-secrets-b54883e6-3802-42eb-ac65-973e5aaa40e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036739608s
STEP: Saw pod success
May 31 12:26:52.220: INFO: Pod "pod-projected-secrets-b54883e6-3802-42eb-ac65-973e5aaa40e9" satisfied condition "Succeeded or Failed"
May 31 12:26:52.224: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-b54883e6-3802-42eb-ac65-973e5aaa40e9 container secret-volume-test: <nil>
STEP: delete the pod
May 31 12:26:52.259: INFO: Waiting for pod pod-projected-secrets-b54883e6-3802-42eb-ac65-973e5aaa40e9 to disappear
May 31 12:26:52.267: INFO: Pod pod-projected-secrets-b54883e6-3802-42eb-ac65-973e5aaa40e9 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
May 31 12:26:52.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2859" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":292,"completed":65,"skipped":1009,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
May 31 12:26:53.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0531 12:26:53.409114   11902 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-9816" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":292,"completed":66,"skipped":1013,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 12:26:53.564: INFO: Waiting up to 5m0s for pod "downwardapi-volume-113fa66a-4667-4bed-811f-2cdec2c13975" in namespace "projected-6817" to be "Succeeded or Failed"
May 31 12:26:53.569: INFO: Pod "downwardapi-volume-113fa66a-4667-4bed-811f-2cdec2c13975": Phase="Pending", Reason="", readiness=false. Elapsed: 5.429446ms
May 31 12:26:55.575: INFO: Pod "downwardapi-volume-113fa66a-4667-4bed-811f-2cdec2c13975": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011445457s
May 31 12:26:57.584: INFO: Pod "downwardapi-volume-113fa66a-4667-4bed-811f-2cdec2c13975": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020027359s
STEP: Saw pod success
May 31 12:26:57.584: INFO: Pod "downwardapi-volume-113fa66a-4667-4bed-811f-2cdec2c13975" satisfied condition "Succeeded or Failed"
May 31 12:26:57.591: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-113fa66a-4667-4bed-811f-2cdec2c13975 container client-container: <nil>
STEP: delete the pod
May 31 12:26:57.660: INFO: Waiting for pod downwardapi-volume-113fa66a-4667-4bed-811f-2cdec2c13975 to disappear
May 31 12:26:57.671: INFO: Pod downwardapi-volume-113fa66a-4667-4bed-811f-2cdec2c13975 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
May 31 12:26:57.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6817" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":292,"completed":67,"skipped":1026,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
May 31 12:27:00.951: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
May 31 12:27:01.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4780" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":292,"completed":68,"skipped":1038,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 13 lines ...
May 31 12:27:01.210: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-4825 /api/v1/namespaces/watch-4825/configmaps/e2e-watch-test-resource-version 6c3ec4cd-693e-48c2-af89-ce03890257c5 6798 0 2020-05-31 12:27:01 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-05-31 12:27:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 31 12:27:01.210: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-4825 /api/v1/namespaces/watch-4825/configmaps/e2e-watch-test-resource-version 6c3ec4cd-693e-48c2-af89-ce03890257c5 6799 0 2020-05-31 12:27:01 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-05-31 12:27:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
May 31 12:27:01.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4825" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":292,"completed":69,"skipped":1061,"failed":0}
SSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 5 lines ...
[BeforeEach] [sig-network] Services
  test/e2e/network/service.go:809
[It] should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating service multi-endpoint-test in namespace services-8508
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8508 to expose endpoints map[]
May 31 12:27:01.324: INFO: Get endpoints failed (14.380102ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
May 31 12:27:02.328: INFO: successfully validated that service multi-endpoint-test in namespace services-8508 exposes endpoints map[] (1.017739079s elapsed)
STEP: Creating pod pod1 in namespace services-8508
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8508 to expose endpoints map[pod1:[100]]
May 31 12:27:05.409: INFO: successfully validated that service multi-endpoint-test in namespace services-8508 exposes endpoints map[pod1:[100]] (3.066608793s elapsed)
STEP: Creating pod pod2 in namespace services-8508
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8508 to expose endpoints map[pod1:[100] pod2:[101]]
... skipping 7 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 12:27:09.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8508" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":292,"completed":70,"skipped":1065,"failed":0}

------------------------------
[sig-network] Services 
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 52 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 12:27:35.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9375" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":292,"completed":71,"skipped":1065,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Updating configmap configmap-test-upd-034b58d1-fb03-46bf-9cf1-85a6f3c289cb
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
May 31 12:28:46.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1816" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":72,"skipped":1130,"failed":0}
SSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
May 31 12:28:50.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8050" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":73,"skipped":1135,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
May 31 12:28:50.308: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's command
May 31 12:28:50.350: INFO: Waiting up to 5m0s for pod "var-expansion-82cc0045-518c-48ec-b0a5-9e9a63a6f9b5" in namespace "var-expansion-314" to be "Succeeded or Failed"
May 31 12:28:50.354: INFO: Pod "var-expansion-82cc0045-518c-48ec-b0a5-9e9a63a6f9b5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.464167ms
May 31 12:28:52.358: INFO: Pod "var-expansion-82cc0045-518c-48ec-b0a5-9e9a63a6f9b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008128098s
May 31 12:28:54.363: INFO: Pod "var-expansion-82cc0045-518c-48ec-b0a5-9e9a63a6f9b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012444093s
STEP: Saw pod success
May 31 12:28:54.363: INFO: Pod "var-expansion-82cc0045-518c-48ec-b0a5-9e9a63a6f9b5" satisfied condition "Succeeded or Failed"
May 31 12:28:54.366: INFO: Trying to get logs from node kind-worker pod var-expansion-82cc0045-518c-48ec-b0a5-9e9a63a6f9b5 container dapi-container: <nil>
STEP: delete the pod
May 31 12:28:54.400: INFO: Waiting for pod var-expansion-82cc0045-518c-48ec-b0a5-9e9a63a6f9b5 to disappear
May 31 12:28:54.405: INFO: Pod var-expansion-82cc0045-518c-48ec-b0a5-9e9a63a6f9b5 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
May 31 12:28:54.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-314" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":292,"completed":74,"skipped":1144,"failed":0}
SSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath 
  runs ReplicaSets to verify preemption running path [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 32 lines ...
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/framework.go:175
May 31 12:30:33.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-8020" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:75
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":292,"completed":75,"skipped":1149,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
May 31 12:30:38.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9157" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":292,"completed":76,"skipped":1181,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
May 31 12:30:38.143: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-5f21d3aa-8bbf-4eca-9c3c-2793732547df" in namespace "security-context-test-9198" to be "Succeeded or Failed"
May 31 12:30:38.147: INFO: Pod "busybox-privileged-false-5f21d3aa-8bbf-4eca-9c3c-2793732547df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.574449ms
May 31 12:30:40.156: INFO: Pod "busybox-privileged-false-5f21d3aa-8bbf-4eca-9c3c-2793732547df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012684583s
May 31 12:30:42.164: INFO: Pod "busybox-privileged-false-5f21d3aa-8bbf-4eca-9c3c-2793732547df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020913907s
May 31 12:30:42.164: INFO: Pod "busybox-privileged-false-5f21d3aa-8bbf-4eca-9c3c-2793732547df" satisfied condition "Succeeded or Failed"
May 31 12:30:42.192: INFO: Got logs for pod "busybox-privileged-false-5f21d3aa-8bbf-4eca-9c3c-2793732547df": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
May 31 12:30:42.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9198" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":77,"skipped":1197,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
May 31 12:30:46.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-441" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":78,"skipped":1204,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 12:30:46.400: INFO: Waiting up to 5m0s for pod "downwardapi-volume-512b5916-f12c-48bc-ad39-0a7369b64fe8" in namespace "downward-api-9424" to be "Succeeded or Failed"
May 31 12:30:46.404: INFO: Pod "downwardapi-volume-512b5916-f12c-48bc-ad39-0a7369b64fe8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.518112ms
May 31 12:30:48.415: INFO: Pod "downwardapi-volume-512b5916-f12c-48bc-ad39-0a7369b64fe8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014965299s
May 31 12:30:50.427: INFO: Pod "downwardapi-volume-512b5916-f12c-48bc-ad39-0a7369b64fe8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026880467s
STEP: Saw pod success
May 31 12:30:50.427: INFO: Pod "downwardapi-volume-512b5916-f12c-48bc-ad39-0a7369b64fe8" satisfied condition "Succeeded or Failed"
May 31 12:30:50.436: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-512b5916-f12c-48bc-ad39-0a7369b64fe8 container client-container: <nil>
STEP: delete the pod
May 31 12:30:50.482: INFO: Waiting for pod downwardapi-volume-512b5916-f12c-48bc-ad39-0a7369b64fe8 to disappear
May 31 12:30:50.487: INFO: Pod downwardapi-volume-512b5916-f12c-48bc-ad39-0a7369b64fe8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
May 31 12:30:50.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9424" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":79,"skipped":1209,"failed":0}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 12 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1428
STEP: Creating statefulset with conflicting port in namespace statefulset-1428
STEP: Waiting until pod test-pod will start running in namespace statefulset-1428
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1428
May 31 12:30:56.651: INFO: Observed stateful pod in namespace: statefulset-1428, name: ss-0, uid: ba39bbb8-a423-44ec-9074-93e9e6157104, status phase: Pending. Waiting for statefulset controller to delete.
May 31 12:30:56.828: INFO: Observed stateful pod in namespace: statefulset-1428, name: ss-0, uid: ba39bbb8-a423-44ec-9074-93e9e6157104, status phase: Failed. Waiting for statefulset controller to delete.
May 31 12:30:56.859: INFO: Observed stateful pod in namespace: statefulset-1428, name: ss-0, uid: ba39bbb8-a423-44ec-9074-93e9e6157104, status phase: Failed. Waiting for statefulset controller to delete.
May 31 12:30:56.879: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1428
STEP: Removing pod with conflicting port in namespace statefulset-1428
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1428 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:114
May 31 12:31:02.984: INFO: Deleting all statefulset in ns statefulset-1428
May 31 12:31:02.995: INFO: Scaling statefulset ss to 0
May 31 12:31:23.045: INFO: Waiting for statefulset status.replicas updated to 0
May 31 12:31:23.050: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
May 31 12:31:23.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1428" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":292,"completed":80,"skipped":1213,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 31 12:31:23.076: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
May 31 12:31:33.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-642" for this suite.
•{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":292,"completed":81,"skipped":1258,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
May 31 12:31:33.648: INFO: stderr: ""
May 31 12:31:33.648: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://127.0.0.1:40063\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://127.0.0.1:40063/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 12:31:33.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4638" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":292,"completed":82,"skipped":1291,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Pods Extended
... skipping 10 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  test/e2e/framework/framework.go:175
May 31 12:31:33.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6439" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":292,"completed":83,"skipped":1305,"failed":0}
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
May 31 12:31:49.886: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:31:49.891: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:31:49.944: INFO: Unable to read jessie_udp@dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:31:49.951: INFO: Unable to read jessie_tcp@dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:31:49.963: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:31:49.974: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:31:50.020: INFO: Lookups using dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d failed for: [wheezy_udp@dns-test-service.dns-7981.svc.cluster.local wheezy_tcp@dns-test-service.dns-7981.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local jessie_udp@dns-test-service.dns-7981.svc.cluster.local jessie_tcp@dns-test-service.dns-7981.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local]

May 31 12:31:55.033: INFO: Unable to read wheezy_udp@dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:31:55.041: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:31:55.048: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:31:55.056: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:31:55.092: INFO: Unable to read jessie_udp@dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:31:55.096: INFO: Unable to read jessie_tcp@dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:31:55.100: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:31:55.103: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:31:55.127: INFO: Lookups using dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d failed for: [wheezy_udp@dns-test-service.dns-7981.svc.cluster.local wheezy_tcp@dns-test-service.dns-7981.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local jessie_udp@dns-test-service.dns-7981.svc.cluster.local jessie_tcp@dns-test-service.dns-7981.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local]

May 31 12:32:00.038: INFO: Unable to read wheezy_udp@dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:32:00.046: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:32:00.056: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:32:00.065: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:32:00.116: INFO: Unable to read jessie_udp@dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:32:00.127: INFO: Unable to read jessie_tcp@dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:32:00.136: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:32:00.147: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:32:00.202: INFO: Lookups using dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d failed for: [wheezy_udp@dns-test-service.dns-7981.svc.cluster.local wheezy_tcp@dns-test-service.dns-7981.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local jessie_udp@dns-test-service.dns-7981.svc.cluster.local jessie_tcp@dns-test-service.dns-7981.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local]

May 31 12:32:05.040: INFO: Unable to read wheezy_udp@dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:32:05.051: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:32:05.071: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:32:05.091: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:32:05.154: INFO: Unable to read jessie_udp@dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:32:05.160: INFO: Unable to read jessie_tcp@dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:32:05.166: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:32:05.170: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local from pod dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d: the server could not find the requested resource (get pods dns-test-5b6316aa-6e04-41f2-8111-524004f9437d)
May 31 12:32:05.209: INFO: Lookups using dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d failed for: [wheezy_udp@dns-test-service.dns-7981.svc.cluster.local wheezy_tcp@dns-test-service.dns-7981.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local jessie_udp@dns-test-service.dns-7981.svc.cluster.local jessie_tcp@dns-test-service.dns-7981.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7981.svc.cluster.local]

May 31 12:32:10.147: INFO: DNS probes using dns-7981/dns-test-5b6316aa-6e04-41f2-8111-524004f9437d succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
May 31 12:32:10.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7981" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":292,"completed":84,"skipped":1310,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
May 31 12:32:10.372: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 31 12:32:14.064: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 12:32:28.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-319" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":292,"completed":85,"skipped":1324,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 100 lines ...
May 31 12:34:13.832: INFO: Waiting for statefulset status.replicas updated to 0
May 31 12:34:13.835: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
May 31 12:34:13.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8266" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":292,"completed":86,"skipped":1337,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 26 lines ...
May 31 12:35:04.026: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4630 /api/v1/namespaces/watch-4630/configmaps/e2e-watch-test-configmap-b 7237f52c-b196-4ab6-a11e-dbc1e024fa04 9410 0 2020-05-31 12:34:53 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-31 12:34:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 31 12:35:04.027: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4630 /api/v1/namespaces/watch-4630/configmaps/e2e-watch-test-configmap-b 7237f52c-b196-4ab6-a11e-dbc1e024fa04 9410 0 2020-05-31 12:34:53 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-31 12:34:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
May 31 12:35:14.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4630" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":292,"completed":87,"skipped":1348,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 12:35:14.201: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6ad0bb6e-dc48-4e82-9352-5f5958589cc7" in namespace "downward-api-1667" to be "Succeeded or Failed"
May 31 12:35:14.207: INFO: Pod "downwardapi-volume-6ad0bb6e-dc48-4e82-9352-5f5958589cc7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.937174ms
May 31 12:35:16.215: INFO: Pod "downwardapi-volume-6ad0bb6e-dc48-4e82-9352-5f5958589cc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014343626s
May 31 12:35:18.219: INFO: Pod "downwardapi-volume-6ad0bb6e-dc48-4e82-9352-5f5958589cc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018506449s
STEP: Saw pod success
May 31 12:35:18.219: INFO: Pod "downwardapi-volume-6ad0bb6e-dc48-4e82-9352-5f5958589cc7" satisfied condition "Succeeded or Failed"
May 31 12:35:18.223: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-6ad0bb6e-dc48-4e82-9352-5f5958589cc7 container client-container: <nil>
STEP: delete the pod
May 31 12:35:18.267: INFO: Waiting for pod downwardapi-volume-6ad0bb6e-dc48-4e82-9352-5f5958589cc7 to disappear
May 31 12:35:18.272: INFO: Pod downwardapi-volume-6ad0bb6e-dc48-4e82-9352-5f5958589cc7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
May 31 12:35:18.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1667" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":88,"skipped":1369,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 12:35:18.284: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
May 31 12:35:18.328: INFO: Waiting up to 5m0s for pod "pod-062a9fbb-1425-4bab-8202-c8bb8ae12a82" in namespace "emptydir-7646" to be "Succeeded or Failed"
May 31 12:35:18.334: INFO: Pod "pod-062a9fbb-1425-4bab-8202-c8bb8ae12a82": Phase="Pending", Reason="", readiness=false. Elapsed: 5.731058ms
May 31 12:35:20.352: INFO: Pod "pod-062a9fbb-1425-4bab-8202-c8bb8ae12a82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023567152s
May 31 12:35:22.356: INFO: Pod "pod-062a9fbb-1425-4bab-8202-c8bb8ae12a82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028160089s
STEP: Saw pod success
May 31 12:35:22.356: INFO: Pod "pod-062a9fbb-1425-4bab-8202-c8bb8ae12a82" satisfied condition "Succeeded or Failed"
May 31 12:35:22.363: INFO: Trying to get logs from node kind-worker2 pod pod-062a9fbb-1425-4bab-8202-c8bb8ae12a82 container test-container: <nil>
STEP: delete the pod
May 31 12:35:22.388: INFO: Waiting for pod pod-062a9fbb-1425-4bab-8202-c8bb8ae12a82 to disappear
May 31 12:35:22.392: INFO: Pod pod-062a9fbb-1425-4bab-8202-c8bb8ae12a82 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 12:35:22.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7646" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":89,"skipped":1374,"failed":0}

------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should print the output to logs [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
May 31 12:35:26.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9311" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":292,"completed":90,"skipped":1374,"failed":0}
SSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
May 31 12:35:26.507: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's args
May 31 12:35:26.599: INFO: Waiting up to 5m0s for pod "var-expansion-40e73469-dd2d-471c-bfc7-9619c9eb2a35" in namespace "var-expansion-7708" to be "Succeeded or Failed"
May 31 12:35:26.607: INFO: Pod "var-expansion-40e73469-dd2d-471c-bfc7-9619c9eb2a35": Phase="Pending", Reason="", readiness=false. Elapsed: 7.84354ms
May 31 12:35:28.614: INFO: Pod "var-expansion-40e73469-dd2d-471c-bfc7-9619c9eb2a35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015004209s
May 31 12:35:30.620: INFO: Pod "var-expansion-40e73469-dd2d-471c-bfc7-9619c9eb2a35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021246689s
STEP: Saw pod success
May 31 12:35:30.620: INFO: Pod "var-expansion-40e73469-dd2d-471c-bfc7-9619c9eb2a35" satisfied condition "Succeeded or Failed"
May 31 12:35:30.626: INFO: Trying to get logs from node kind-worker2 pod var-expansion-40e73469-dd2d-471c-bfc7-9619c9eb2a35 container dapi-container: <nil>
STEP: delete the pod
May 31 12:35:30.641: INFO: Waiting for pod var-expansion-40e73469-dd2d-471c-bfc7-9619c9eb2a35 to disappear
May 31 12:35:30.644: INFO: Pod var-expansion-40e73469-dd2d-471c-bfc7-9619c9eb2a35 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
May 31 12:35:30.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7708" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":292,"completed":91,"skipped":1377,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 12 lines ...
May 31 12:35:34.724: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 31 12:35:34.947: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 12:35:34.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4347" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":292,"completed":92,"skipped":1463,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-e9651701-637f-49d4-a354-0df8f83630d4
STEP: Creating a pod to test consume configMaps
May 31 12:35:35.027: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ee766b3b-2a34-4b36-8df5-61913fca6cad" in namespace "projected-2636" to be "Succeeded or Failed"
May 31 12:35:35.031: INFO: Pod "pod-projected-configmaps-ee766b3b-2a34-4b36-8df5-61913fca6cad": Phase="Pending", Reason="", readiness=false. Elapsed: 3.708543ms
May 31 12:35:37.040: INFO: Pod "pod-projected-configmaps-ee766b3b-2a34-4b36-8df5-61913fca6cad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012708229s
May 31 12:35:39.049: INFO: Pod "pod-projected-configmaps-ee766b3b-2a34-4b36-8df5-61913fca6cad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021736806s
STEP: Saw pod success
May 31 12:35:39.050: INFO: Pod "pod-projected-configmaps-ee766b3b-2a34-4b36-8df5-61913fca6cad" satisfied condition "Succeeded or Failed"
May 31 12:35:39.059: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-ee766b3b-2a34-4b36-8df5-61913fca6cad container projected-configmap-volume-test: <nil>
STEP: delete the pod
May 31 12:35:39.122: INFO: Waiting for pod pod-projected-configmaps-ee766b3b-2a34-4b36-8df5-61913fca6cad to disappear
May 31 12:35:39.127: INFO: Pod pod-projected-configmaps-ee766b3b-2a34-4b36-8df5-61913fca6cad no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
May 31 12:35:39.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2636" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":292,"completed":93,"skipped":1480,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
May 31 12:35:46.206: INFO: stderr: ""
May 31 12:35:46.206: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3450-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 12:35:49.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5582" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":292,"completed":94,"skipped":1484,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 27 lines ...
  test/e2e/framework/framework.go:175
May 31 12:35:57.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5063" for this suite.
STEP: Destroying namespace "webhook-5063-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":292,"completed":95,"skipped":1506,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
May 31 12:36:08.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-823" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":292,"completed":96,"skipped":1533,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
... skipping 18 lines ...
STEP: Deleting second CR
May 31 12:36:59.247: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-31T12:36:19Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-31T12:36:39Z]] name:name2 resourceVersion:10037 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1815a3be-cab0-4a20-83bb-1d0bef57a7ea] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 12:37:09.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-2839" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":292,"completed":97,"skipped":1537,"failed":0}
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-55ddd800-5091-4f27-a16f-202957eee96e
STEP: Creating a pod to test consume secrets
May 31 12:37:09.860: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3af868fb-312d-4044-b56c-ea5ae6c86bce" in namespace "projected-6969" to be "Succeeded or Failed"
May 31 12:37:09.867: INFO: Pod "pod-projected-secrets-3af868fb-312d-4044-b56c-ea5ae6c86bce": Phase="Pending", Reason="", readiness=false. Elapsed: 7.474559ms
May 31 12:37:11.873: INFO: Pod "pod-projected-secrets-3af868fb-312d-4044-b56c-ea5ae6c86bce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012704127s
May 31 12:37:13.878: INFO: Pod "pod-projected-secrets-3af868fb-312d-4044-b56c-ea5ae6c86bce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017670572s
STEP: Saw pod success
May 31 12:37:13.878: INFO: Pod "pod-projected-secrets-3af868fb-312d-4044-b56c-ea5ae6c86bce" satisfied condition "Succeeded or Failed"
May 31 12:37:13.888: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-3af868fb-312d-4044-b56c-ea5ae6c86bce container projected-secret-volume-test: <nil>
STEP: delete the pod
May 31 12:37:13.938: INFO: Waiting for pod pod-projected-secrets-3af868fb-312d-4044-b56c-ea5ae6c86bce to disappear
May 31 12:37:13.946: INFO: Pod pod-projected-secrets-3af868fb-312d-4044-b56c-ea5ae6c86bce no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
May 31 12:37:13.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6969" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":292,"completed":98,"skipped":1541,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 24 lines ...
May 31 12:37:19.326: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
May 31 12:37:19.326: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:40063 --kubeconfig=/root/.kube/kind-test-config describe pod agnhost-master-jnpwt --namespace=kubectl-9011'
May 31 12:37:19.770: INFO: stderr: ""
May 31 12:37:19.771: INFO: stdout: "Name:         agnhost-master-jnpwt\nNamespace:    kubectl-9011\nPriority:     0\nNode:         kind-worker2/172.18.0.4\nStart Time:   Sun, 31 May 2020 12:37:15 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.244.2.105\nIPs:\n  IP:           10.244.2.105\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://5d358ba1892593373599cf7b714b9a14e8640dfe6bb0dd0cef2e9066c3afea80\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sun, 31 May 2020 12:37:17 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-h28xx (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-h28xx:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-h28xx\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From                   Message\n  ----    ------     ----  ----                   -------\n  Normal  Scheduled  4s    default-scheduler      Successfully assigned kubectl-9011/agnhost-master-jnpwt to kind-worker2\n  Normal  Pulled     2s    kubelet, kind-worker2  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on 
machine\n  Normal  Created    2s    kubelet, kind-worker2  Created container agnhost-master\n  Normal  Started    2s    kubelet, kind-worker2  Started container agnhost-master\n"
May 31 12:37:19.771: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:40063 --kubeconfig=/root/.kube/kind-test-config describe rc agnhost-master --namespace=kubectl-9011'
May 31 12:37:20.200: INFO: stderr: ""
May 31 12:37:20.201: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-9011\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  5s    replication-controller  Created pod: agnhost-master-jnpwt\n"
May 31 12:37:20.202: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:40063 --kubeconfig=/root/.kube/kind-test-config describe service agnhost-master --namespace=kubectl-9011'
May 31 12:37:20.664: INFO: stderr: ""
May 31 12:37:20.664: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-9011\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.107.102.136\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.2.105:6379\nSession Affinity:  None\nEvents:            <none>\n"
May 31 12:37:20.681: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:40063 --kubeconfig=/root/.kube/kind-test-config describe node kind-control-plane'
May 31 12:37:21.155: INFO: stderr: ""
May 31 12:37:21.155: INFO: stdout: "Name:               kind-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kind-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 31 May 2020 12:07:05 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kind-control-plane\n  AcquireTime:     <unset>\n  RenewTime:       Sun, 31 May 2020 12:37:15 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Sun, 31 May 2020 12:32:45 +0000   Sun, 31 May 2020 12:07:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Sun, 31 May 2020 12:32:45 +0000   Sun, 31 May 2020 12:07:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Sun, 31 May 2020 12:32:45 +0000   Sun, 31 May 2020 12:07:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Sun, 31 May 2020 12:32:45 +0000   Sun, 31 May 2020 12:07:44 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.3\n  Hostname:    kind-control-plane\nCapacity:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  
hugepages-2Mi:      0\n  memory:             53582972Ki\n  pods:               110\nAllocatable:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             53582972Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 73f82dd03df044e98ef1aeb6a39d6401\n  System UUID:                cfdd0c73-3bf8-455e-9550-4948004af30e\n  Boot ID:                    01bede56-b6da-4551-b605-0cf693263b5a\n  Kernel Version:             4.15.0-1044-gke\n  OS Image:                   Ubuntu 20.04 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.4-12-g1e902b2d\n  Kubelet Version:            v1.19.0-beta.0.313+46d08c89ab9f55\n  Kube-Proxy Version:         v1.19.0-beta.0.313+46d08c89ab9f55\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (7 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-66bff467f8-g4w4s                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29m\n  kube-system                 etcd-kind-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m\n  kube-system                 kindnet-4s6wb                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29m\n  kube-system                 kube-apiserver-kind-control-plane             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30m\n  kube-system                 kube-controller-manager-kind-control-plane    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30m\n  kube-system                 kube-proxy-mtm9b  
                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m\n  kube-system                 kube-scheduler-kind-control-plane             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                750m (9%)   100m (1%)\n  memory             120Mi (0%)  220Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:\n  Type     Reason                    Age   From                            Message\n  ----     ------                    ----  ----                            -------\n  Normal   Starting                  30m   kubelet, kind-control-plane     Starting kubelet.\n  Normal   NodeHasSufficientMemory   30m   kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure     30m   kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID      30m   kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientPID\n  Warning  CheckLimitsForResolvConf  30m   kubelet, kind-control-plane     Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\n  Normal   NodeAllocatableEnforced   30m   kubelet, kind-control-plane     Updated Node Allocatable limit across pods\n  Normal   Starting                  29m   kube-proxy, kind-control-plane  Starting kube-proxy.\n  Normal   NodeReady                 29m   kubelet, kind-control-plane     Node kind-control-plane status is now: NodeReady\n"
May 31 12:37:21.155: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:40063 --kubeconfig=/root/.kube/kind-test-config describe namespace kubectl-9011'
May 31 12:37:21.567: INFO: stderr: ""
May 31 12:37:21.567: INFO: stdout: "Name:         kubectl-9011\nLabels:       e2e-framework=kubectl\n              e2e-run=e7111260-a50c-47e5-96ec-3b115bebfb7e\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 12:37:21.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9011" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":292,"completed":99,"skipped":1560,"failed":0}

------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 33 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
May 31 12:37:31.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0531 12:37:31.687596   11902 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-9559" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":292,"completed":100,"skipped":1560,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 190 lines ...
May 31 12:37:46.755: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 31 12:37:46.755: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 12:37:46.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4181" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":292,"completed":101,"skipped":1597,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 12:37:46.977: INFO: Waiting up to 5m0s for pod "downwardapi-volume-18f3c595-d9ee-46fe-a424-202fe5bbe12c" in namespace "projected-8048" to be "Succeeded or Failed"
May 31 12:37:47.035: INFO: Pod "downwardapi-volume-18f3c595-d9ee-46fe-a424-202fe5bbe12c": Phase="Pending", Reason="", readiness=false. Elapsed: 57.458086ms
May 31 12:37:49.056: INFO: Pod "downwardapi-volume-18f3c595-d9ee-46fe-a424-202fe5bbe12c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078156108s
May 31 12:37:51.070: INFO: Pod "downwardapi-volume-18f3c595-d9ee-46fe-a424-202fe5bbe12c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092603925s
STEP: Saw pod success
May 31 12:37:51.070: INFO: Pod "downwardapi-volume-18f3c595-d9ee-46fe-a424-202fe5bbe12c" satisfied condition "Succeeded or Failed"
May 31 12:37:51.080: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-18f3c595-d9ee-46fe-a424-202fe5bbe12c container client-container: <nil>
STEP: delete the pod
May 31 12:37:51.115: INFO: Waiting for pod downwardapi-volume-18f3c595-d9ee-46fe-a424-202fe5bbe12c to disappear
May 31 12:37:51.126: INFO: Pod downwardapi-volume-18f3c595-d9ee-46fe-a424-202fe5bbe12c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
May 31 12:37:51.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8048" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":292,"completed":102,"skipped":1628,"failed":0}
SSSS
------------------------------
[sig-network] Services 
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 65 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 12:38:55.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5144" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":292,"completed":103,"skipped":1632,"failed":0}
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 28 lines ...
May 31 12:39:19.851: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 31 12:39:20.134: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
May 31 12:39:20.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4844" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":104,"skipped":1638,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
May 31 12:39:20.244: INFO: Waiting up to 5m0s for pod "busybox-user-65534-9f3f919d-066e-4eae-ac25-bae8881742ff" in namespace "security-context-test-7517" to be "Succeeded or Failed"
May 31 12:39:20.254: INFO: Pod "busybox-user-65534-9f3f919d-066e-4eae-ac25-bae8881742ff": Phase="Pending", Reason="", readiness=false. Elapsed: 9.873082ms
May 31 12:39:22.260: INFO: Pod "busybox-user-65534-9f3f919d-066e-4eae-ac25-bae8881742ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016195234s
May 31 12:39:24.267: INFO: Pod "busybox-user-65534-9f3f919d-066e-4eae-ac25-bae8881742ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023204044s
May 31 12:39:24.267: INFO: Pod "busybox-user-65534-9f3f919d-066e-4eae-ac25-bae8881742ff" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
May 31 12:39:24.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7517" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":105,"skipped":1652,"failed":0}

------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
May 31 12:39:24.408: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-a24a4397-ef7f-4cb6-9981-1dc3c6c616f6" in namespace "security-context-test-9275" to be "Succeeded or Failed"
May 31 12:39:24.416: INFO: Pod "busybox-readonly-false-a24a4397-ef7f-4cb6-9981-1dc3c6c616f6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.686431ms
May 31 12:39:26.426: INFO: Pod "busybox-readonly-false-a24a4397-ef7f-4cb6-9981-1dc3c6c616f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018506153s
May 31 12:39:28.432: INFO: Pod "busybox-readonly-false-a24a4397-ef7f-4cb6-9981-1dc3c6c616f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023803185s
May 31 12:39:28.432: INFO: Pod "busybox-readonly-false-a24a4397-ef7f-4cb6-9981-1dc3c6c616f6" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
May 31 12:39:28.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9275" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":292,"completed":106,"skipped":1652,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
May 31 12:39:28.474: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
May 31 12:39:40.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5887" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":292,"completed":107,"skipped":1668,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 28 lines ...
May 31 12:41:02.552: INFO: Terminating ReplicationController wrapped-volume-race-41a72d16-fe62-4be1-9932-16fd23822911 pods took: 400.985158ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:175
May 31 12:41:15.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-80" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":292,"completed":108,"skipped":1670,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-847e2832-9381-4c1d-824d-93819b241b41
STEP: Creating a pod to test consume configMaps
May 31 12:41:15.124: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-25d90301-01b8-40c9-9f41-8f03758c28e5" in namespace "projected-7662" to be "Succeeded or Failed"
May 31 12:41:15.128: INFO: Pod "pod-projected-configmaps-25d90301-01b8-40c9-9f41-8f03758c28e5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.584476ms
May 31 12:41:17.135: INFO: Pod "pod-projected-configmaps-25d90301-01b8-40c9-9f41-8f03758c28e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01059224s
May 31 12:41:19.140: INFO: Pod "pod-projected-configmaps-25d90301-01b8-40c9-9f41-8f03758c28e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015353757s
STEP: Saw pod success
May 31 12:41:19.140: INFO: Pod "pod-projected-configmaps-25d90301-01b8-40c9-9f41-8f03758c28e5" satisfied condition "Succeeded or Failed"
May 31 12:41:19.147: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-25d90301-01b8-40c9-9f41-8f03758c28e5 container projected-configmap-volume-test: <nil>
STEP: delete the pod
May 31 12:41:19.184: INFO: Waiting for pod pod-projected-configmaps-25d90301-01b8-40c9-9f41-8f03758c28e5 to disappear
May 31 12:41:19.188: INFO: Pod pod-projected-configmaps-25d90301-01b8-40c9-9f41-8f03758c28e5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
May 31 12:41:19.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7662" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":109,"skipped":1675,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-84910c7e-9f10-4e46-82ee-3d97cee35e4a
STEP: Creating a pod to test consume secrets
May 31 12:41:19.290: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ffbe4e25-9053-451b-8148-3153c8f170f3" in namespace "projected-1614" to be "Succeeded or Failed"
May 31 12:41:19.294: INFO: Pod "pod-projected-secrets-ffbe4e25-9053-451b-8148-3153c8f170f3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.541653ms
May 31 12:41:21.298: INFO: Pod "pod-projected-secrets-ffbe4e25-9053-451b-8148-3153c8f170f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007833545s
May 31 12:41:23.311: INFO: Pod "pod-projected-secrets-ffbe4e25-9053-451b-8148-3153c8f170f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020602077s
STEP: Saw pod success
May 31 12:41:23.311: INFO: Pod "pod-projected-secrets-ffbe4e25-9053-451b-8148-3153c8f170f3" satisfied condition "Succeeded or Failed"
May 31 12:41:23.320: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-ffbe4e25-9053-451b-8148-3153c8f170f3 container projected-secret-volume-test: <nil>
STEP: delete the pod
May 31 12:41:23.377: INFO: Waiting for pod pod-projected-secrets-ffbe4e25-9053-451b-8148-3153c8f170f3 to disappear
May 31 12:41:23.383: INFO: Pod pod-projected-secrets-ffbe4e25-9053-451b-8148-3153c8f170f3 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
May 31 12:41:23.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1614" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":110,"skipped":1699,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] PodTemplates 
  should run the lifecycle of PodTemplates [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] PodTemplates
... skipping 5 lines ...
[It] should run the lifecycle of PodTemplates [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [sig-node] PodTemplates
  test/e2e/framework/framework.go:175
May 31 12:41:23.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-6555" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":292,"completed":111,"skipped":1715,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 14 lines ...
STEP: verifying the updated pod is in kubernetes
May 31 12:41:28.343: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
May 31 12:41:28.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2409" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":292,"completed":112,"skipped":1724,"failed":0}

------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  test/e2e/framework/framework.go:175
May 31 12:41:37.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4194" for this suite.
STEP: Destroying namespace "webhook-4194-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":292,"completed":113,"skipped":1724,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
... skipping 7 lines ...
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  test/e2e/framework/framework.go:175
May 31 12:41:37.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3212" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":292,"completed":114,"skipped":1767,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
May 31 12:41:37.864: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 31 12:41:41.555: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 12:41:55.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2897" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":292,"completed":115,"skipped":1771,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected combined
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-projected-all-test-volume-da1427b9-dae5-44cd-92db-dd5bef89870a
STEP: Creating secret with name secret-projected-all-test-volume-03942657-318f-4609-bc6e-2bf0ca76a5da
STEP: Creating a pod to test Check all projections for projected volume plugin
May 31 12:41:55.759: INFO: Waiting up to 5m0s for pod "projected-volume-8193721e-2d5b-473f-9184-e85fee7b6fc5" in namespace "projected-9287" to be "Succeeded or Failed"
May 31 12:41:55.770: INFO: Pod "projected-volume-8193721e-2d5b-473f-9184-e85fee7b6fc5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.970221ms
May 31 12:41:57.800: INFO: Pod "projected-volume-8193721e-2d5b-473f-9184-e85fee7b6fc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040394872s
May 31 12:41:59.816: INFO: Pod "projected-volume-8193721e-2d5b-473f-9184-e85fee7b6fc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056169741s
STEP: Saw pod success
May 31 12:41:59.816: INFO: Pod "projected-volume-8193721e-2d5b-473f-9184-e85fee7b6fc5" satisfied condition "Succeeded or Failed"
May 31 12:41:59.826: INFO: Trying to get logs from node kind-worker2 pod projected-volume-8193721e-2d5b-473f-9184-e85fee7b6fc5 container projected-all-volume-test: <nil>
STEP: delete the pod
May 31 12:41:59.899: INFO: Waiting for pod projected-volume-8193721e-2d5b-473f-9184-e85fee7b6fc5 to disappear
May 31 12:41:59.908: INFO: Pod projected-volume-8193721e-2d5b-473f-9184-e85fee7b6fc5 no longer exists
[AfterEach] [sig-storage] Projected combined
  test/e2e/framework/framework.go:175
May 31 12:41:59.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9287" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":292,"completed":116,"skipped":1830,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 31 12:41:59.943: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename webhook
... skipping 6 lines ...
STEP: Wait for the deployment to be ready
May 31 12:42:00.775: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 31 12:42:02.795: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726525720, loc:(*time.Location)(0x8006d20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726525720, loc:(*time.Location)(0x8006d20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726525720, loc:(*time.Location)(0x8006d20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726525720, loc:(*time.Location)(0x8006d20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 31 12:42:05.822: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
May 31 12:42:05.887: INFO: Waiting for webhook configuration to be ready...
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 12:42:06.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5766" for this suite.
STEP: Destroying namespace "webhook-5766-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":292,"completed":117,"skipped":1842,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Lease
... skipping 5 lines ...
[It] lease API should be available [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Lease
  test/e2e/framework/framework.go:175
May 31 12:42:06.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-3500" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":292,"completed":118,"skipped":1856,"failed":0}

------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-6988e24e-16c2-4817-b910-d73ee1c4b604
STEP: Creating a pod to test consume configMaps
May 31 12:42:06.470: INFO: Waiting up to 5m0s for pod "pod-configmaps-c045c0a8-b900-4865-af86-f36e7e1155e6" in namespace "configmap-9154" to be "Succeeded or Failed"
May 31 12:42:06.477: INFO: Pod "pod-configmaps-c045c0a8-b900-4865-af86-f36e7e1155e6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.640534ms
May 31 12:42:08.488: INFO: Pod "pod-configmaps-c045c0a8-b900-4865-af86-f36e7e1155e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017618119s
May 31 12:42:10.499: INFO: Pod "pod-configmaps-c045c0a8-b900-4865-af86-f36e7e1155e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028834224s
STEP: Saw pod success
May 31 12:42:10.499: INFO: Pod "pod-configmaps-c045c0a8-b900-4865-af86-f36e7e1155e6" satisfied condition "Succeeded or Failed"
May 31 12:42:10.510: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-c045c0a8-b900-4865-af86-f36e7e1155e6 container configmap-volume-test: <nil>
STEP: delete the pod
May 31 12:42:10.560: INFO: Waiting for pod pod-configmaps-c045c0a8-b900-4865-af86-f36e7e1155e6 to disappear
May 31 12:42:10.574: INFO: Pod pod-configmaps-c045c0a8-b900-4865-af86-f36e7e1155e6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
May 31 12:42:10.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9154" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":292,"completed":119,"skipped":1856,"failed":0}
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 34 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
May 31 12:42:18.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2823" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":292,"completed":120,"skipped":1858,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
May 31 12:42:18.870: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 12:42:19.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7652" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":292,"completed":121,"skipped":1860,"failed":0}

------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 28 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 12:42:31.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2524" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":292,"completed":122,"skipped":1860,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 65 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 12:42:55.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1547" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":292,"completed":123,"skipped":1863,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
May 31 12:43:02.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1671" for this suite.
STEP: Destroying namespace "webhook-1671-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":292,"completed":124,"skipped":1900,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 30 lines ...
May 31 12:43:08.775: INFO: Selector matched 1 pods for map[app:agnhost]
May 31 12:43:08.775: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 12:43:08.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9459" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":292,"completed":125,"skipped":1942,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 12:43:16.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-7044" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":292,"completed":126,"skipped":1956,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-9170a089-8f57-4453-997b-3edf40a49b29
STEP: Creating a pod to test consume configMaps
May 31 12:43:16.311: INFO: Waiting up to 5m0s for pod "pod-configmaps-6f42516c-ed20-46d2-9374-a167dc08bd4b" in namespace "configmap-8676" to be "Succeeded or Failed"
May 31 12:43:16.361: INFO: Pod "pod-configmaps-6f42516c-ed20-46d2-9374-a167dc08bd4b": Phase="Pending", Reason="", readiness=false. Elapsed: 49.960292ms
May 31 12:43:18.391: INFO: Pod "pod-configmaps-6f42516c-ed20-46d2-9374-a167dc08bd4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080724935s
May 31 12:43:20.403: INFO: Pod "pod-configmaps-6f42516c-ed20-46d2-9374-a167dc08bd4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092664948s
STEP: Saw pod success
May 31 12:43:20.404: INFO: Pod "pod-configmaps-6f42516c-ed20-46d2-9374-a167dc08bd4b" satisfied condition "Succeeded or Failed"
May 31 12:43:20.411: INFO: Trying to get logs from node kind-worker pod pod-configmaps-6f42516c-ed20-46d2-9374-a167dc08bd4b container configmap-volume-test: <nil>
STEP: delete the pod
May 31 12:43:20.496: INFO: Waiting for pod pod-configmaps-6f42516c-ed20-46d2-9374-a167dc08bd4b to disappear
May 31 12:43:20.504: INFO: Pod pod-configmaps-6f42516c-ed20-46d2-9374-a167dc08bd4b no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
May 31 12:43:20.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8676" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":292,"completed":127,"skipped":1971,"failed":0}
SSS
------------------------------
[k8s.io] Variable Expansion 
  should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 19 lines ...
May 31 12:43:25.716: INFO: Deleting pod "var-expansion-08a3a448-818f-4085-a234-f1996077b232" in namespace "var-expansion-6315"
May 31 12:43:25.726: INFO: Wait up to 5m0s for pod "var-expansion-08a3a448-818f-4085-a234-f1996077b232" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
May 31 12:44:05.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6315" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":292,"completed":128,"skipped":1974,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
May 31 12:44:16.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1725" for this suite.
STEP: Destroying namespace "webhook-1725-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":292,"completed":129,"skipped":1977,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 12:44:16.447: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6eabdef-85f5-4c56-b721-65a0ab1715db" in namespace "projected-9419" to be "Succeeded or Failed"
May 31 12:44:16.455: INFO: Pod "downwardapi-volume-a6eabdef-85f5-4c56-b721-65a0ab1715db": Phase="Pending", Reason="", readiness=false. Elapsed: 7.498138ms
May 31 12:44:18.467: INFO: Pod "downwardapi-volume-a6eabdef-85f5-4c56-b721-65a0ab1715db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019429059s
May 31 12:44:20.478: INFO: Pod "downwardapi-volume-a6eabdef-85f5-4c56-b721-65a0ab1715db": Phase="Running", Reason="", readiness=true. Elapsed: 4.030549823s
May 31 12:44:22.492: INFO: Pod "downwardapi-volume-a6eabdef-85f5-4c56-b721-65a0ab1715db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044410912s
STEP: Saw pod success
May 31 12:44:22.492: INFO: Pod "downwardapi-volume-a6eabdef-85f5-4c56-b721-65a0ab1715db" satisfied condition "Succeeded or Failed"
May 31 12:44:22.501: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-a6eabdef-85f5-4c56-b721-65a0ab1715db container client-container: <nil>
STEP: delete the pod
May 31 12:44:22.555: INFO: Waiting for pod downwardapi-volume-a6eabdef-85f5-4c56-b721-65a0ab1715db to disappear
May 31 12:44:22.572: INFO: Pod downwardapi-volume-a6eabdef-85f5-4c56-b721-65a0ab1715db no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
May 31 12:44:22.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9419" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":292,"completed":130,"skipped":1994,"failed":0}

------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
May 31 12:44:22.598: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May 31 12:44:22.678: INFO: Waiting up to 5m0s for pod "downward-api-4ea719cc-7725-474f-9fb1-3a421f97cb3c" in namespace "downward-api-1676" to be "Succeeded or Failed"
May 31 12:44:22.690: INFO: Pod "downward-api-4ea719cc-7725-474f-9fb1-3a421f97cb3c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.800814ms
May 31 12:44:24.700: INFO: Pod "downward-api-4ea719cc-7725-474f-9fb1-3a421f97cb3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022710565s
May 31 12:44:26.706: INFO: Pod "downward-api-4ea719cc-7725-474f-9fb1-3a421f97cb3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028041563s
STEP: Saw pod success
May 31 12:44:26.706: INFO: Pod "downward-api-4ea719cc-7725-474f-9fb1-3a421f97cb3c" satisfied condition "Succeeded or Failed"
May 31 12:44:26.710: INFO: Trying to get logs from node kind-worker2 pod downward-api-4ea719cc-7725-474f-9fb1-3a421f97cb3c container dapi-container: <nil>
STEP: delete the pod
May 31 12:44:26.750: INFO: Waiting for pod downward-api-4ea719cc-7725-474f-9fb1-3a421f97cb3c to disappear
May 31 12:44:26.760: INFO: Pod downward-api-4ea719cc-7725-474f-9fb1-3a421f97cb3c no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
May 31 12:44:26.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1676" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":292,"completed":131,"skipped":1994,"failed":0}
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
May 31 12:44:30.904: INFO: Initial restart count of pod busybox-0631b1fe-1567-4f41-8f40-3f86f99c4fb6 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
May 31 12:48:31.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9049" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":292,"completed":132,"skipped":1995,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 342 lines ...
May 31 12:48:41.797: INFO: Deleting ReplicationController proxy-service-5dppt took: 8.64177ms
May 31 12:48:42.098: INFO: Terminating ReplicationController proxy-service-5dppt pods took: 301.50977ms
[AfterEach] version v1
  test/e2e/framework/framework.go:175
May 31 12:48:55.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1004" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":292,"completed":133,"skipped":2010,"failed":0}
SS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
May 31 12:48:55.333: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May 31 12:48:55.432: INFO: Waiting up to 5m0s for pod "downward-api-8c418e61-8415-4be1-aaa3-7f3b2bc667f9" in namespace "downward-api-9267" to be "Succeeded or Failed"
May 31 12:48:55.444: INFO: Pod "downward-api-8c418e61-8415-4be1-aaa3-7f3b2bc667f9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.539031ms
May 31 12:48:57.453: INFO: Pod "downward-api-8c418e61-8415-4be1-aaa3-7f3b2bc667f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020324406s
May 31 12:48:59.467: INFO: Pod "downward-api-8c418e61-8415-4be1-aaa3-7f3b2bc667f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034645956s
STEP: Saw pod success
May 31 12:48:59.467: INFO: Pod "downward-api-8c418e61-8415-4be1-aaa3-7f3b2bc667f9" satisfied condition "Succeeded or Failed"
May 31 12:48:59.474: INFO: Trying to get logs from node kind-worker2 pod downward-api-8c418e61-8415-4be1-aaa3-7f3b2bc667f9 container dapi-container: <nil>
STEP: delete the pod
May 31 12:48:59.561: INFO: Waiting for pod downward-api-8c418e61-8415-4be1-aaa3-7f3b2bc667f9 to disappear
May 31 12:48:59.575: INFO: Pod downward-api-8c418e61-8415-4be1-aaa3-7f3b2bc667f9 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
May 31 12:48:59.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9267" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":292,"completed":134,"skipped":2012,"failed":0}
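The repeated `Waiting up to 5m0s for pod ... Phase="Pending" ... Elapsed: ...` lines above come from a poll-until-terminal-phase loop sampling roughly every two seconds. A minimal sketch of that pattern, assuming a hypothetical `get_phase` callable standing in for an API lookup (this is not the framework's actual implementation):

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0):
    """Poll get_phase() until a terminal phase, mimicking the
    'Waiting up to 5m0s ... Elapsed: ...' lines in the log above."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod phase={phase!r}, elapsed={elapsed:.3f}s')
        if phase in want:
            return phase
        if elapsed > timeout:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        time.sleep(interval)

# Simulated pod that reports Pending twice before succeeding,
# matching the Pending/Pending/Succeeded sequence seen above.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_phase(lambda: next(phases), interval=0.01)
```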
SSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 25 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 12:49:24.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1881" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":292,"completed":135,"skipped":2017,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-3b1c491e-5469-46c9-ad6a-32a1bd323da2
STEP: Creating a pod to test consume secrets
May 31 12:49:24.736: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-77196c78-3b06-4da9-a996-9a6c76bcc2e4" in namespace "projected-2634" to be "Succeeded or Failed"
May 31 12:49:24.742: INFO: Pod "pod-projected-secrets-77196c78-3b06-4da9-a996-9a6c76bcc2e4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.746725ms
May 31 12:49:26.751: INFO: Pod "pod-projected-secrets-77196c78-3b06-4da9-a996-9a6c76bcc2e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01513958s
May 31 12:49:28.759: INFO: Pod "pod-projected-secrets-77196c78-3b06-4da9-a996-9a6c76bcc2e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023232307s
STEP: Saw pod success
May 31 12:49:28.760: INFO: Pod "pod-projected-secrets-77196c78-3b06-4da9-a996-9a6c76bcc2e4" satisfied condition "Succeeded or Failed"
May 31 12:49:28.773: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-77196c78-3b06-4da9-a996-9a6c76bcc2e4 container projected-secret-volume-test: <nil>
STEP: delete the pod
May 31 12:49:28.799: INFO: Waiting for pod pod-projected-secrets-77196c78-3b06-4da9-a996-9a6c76bcc2e4 to disappear
May 31 12:49:28.809: INFO: Pod pod-projected-secrets-77196c78-3b06-4da9-a996-9a6c76bcc2e4 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
May 31 12:49:28.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2634" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":136,"skipped":2049,"failed":0}
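In the Pod dumps later in this log, secret and service-account-token volumes show `DefaultMode:*420`. The API serializes file modes in decimal, and 420 decimal is octal 0644 (`rw-r--r--`), which is what the `defaultMode` tests exercise. A quick check of the conversion:

```python
# DefaultMode is serialized in decimal; 420 decimal == 0644 octal (rw-r--r--).
default_mode = 420
assert default_mode == 0o644
print(oct(default_mode))            # decimal 420 rendered in Python octal notation
print(format(default_mode, '04o'))  # "0644", the familiar chmod-style form
```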

------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-3564/configmap-test-7ef21d5c-5d6d-4a53-97d2-1d567a584b15
STEP: Creating a pod to test consume configMaps
May 31 12:49:28.936: INFO: Waiting up to 5m0s for pod "pod-configmaps-c97dab5c-15ec-4fb6-99a1-7206287d0809" in namespace "configmap-3564" to be "Succeeded or Failed"
May 31 12:49:28.948: INFO: Pod "pod-configmaps-c97dab5c-15ec-4fb6-99a1-7206287d0809": Phase="Pending", Reason="", readiness=false. Elapsed: 11.326972ms
May 31 12:49:30.960: INFO: Pod "pod-configmaps-c97dab5c-15ec-4fb6-99a1-7206287d0809": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023705497s
May 31 12:49:32.970: INFO: Pod "pod-configmaps-c97dab5c-15ec-4fb6-99a1-7206287d0809": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034016308s
STEP: Saw pod success
May 31 12:49:32.971: INFO: Pod "pod-configmaps-c97dab5c-15ec-4fb6-99a1-7206287d0809" satisfied condition "Succeeded or Failed"
May 31 12:49:32.983: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-c97dab5c-15ec-4fb6-99a1-7206287d0809 container env-test: <nil>
STEP: delete the pod
May 31 12:49:33.047: INFO: Waiting for pod pod-configmaps-c97dab5c-15ec-4fb6-99a1-7206287d0809 to disappear
May 31 12:49:33.059: INFO: Pod pod-configmaps-c97dab5c-15ec-4fb6-99a1-7206287d0809 no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
May 31 12:49:33.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3564" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":292,"completed":137,"skipped":2049,"failed":0}
S
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 9 lines ...
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/framework.go:175
May 31 12:49:33.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8140" for this suite.
STEP: Destroying namespace "nspatchtest-e4b4c1bc-4378-4edd-ab93-0ff980396837-4473" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":292,"completed":138,"skipped":2050,"failed":0}
SS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 44 lines ...
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-65d87 webserver-deployment-6676bcd6d4- deployment-5461 /api/v1/namespaces/deployment-5461/pods/webserver-deployment-6676bcd6d4-65d87 3f2d692e-8a43-45bb-9384-690c5fd6284d 15297 0 2020-05-31 12:49:41 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e03851ed-9213-41f6-8ad4-9953cde54db8 0xc003154750 0xc003154751}] []  [{kube-controller-manager Update v1 2020-05-31 12:49:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e03851ed-9213-41f6-8ad4-9953cde54db8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-31 12:49:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qbfmp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qbfmp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qbfmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-05-31 12:49:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 31 12:49:43.747: INFO: Pod "webserver-deployment-6676bcd6d4-bhp7k" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-bhp7k webserver-deployment-6676bcd6d4- deployment-5461 /api/v1/namespaces/deployment-5461/pods/webserver-deployment-6676bcd6d4-bhp7k 2fdbbe22-3ee7-44cd-9d30-b1a8650184f9 15287 0 2020-05-31 12:49:41 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e03851ed-9213-41f6-8ad4-9953cde54db8 0xc0031548f0 0xc0031548f1}] []  [{kube-controller-manager Update v1 2020-05-31 12:49:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e03851ed-9213-41f6-8ad4-9953cde54db8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-31 12:49:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qbfmp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qbfmp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qbfmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-05-31 12:49:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 31 12:49:43.747: INFO: Pod "webserver-deployment-6676bcd6d4-cp4mt" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-cp4mt webserver-deployment-6676bcd6d4- deployment-5461 /api/v1/namespaces/deployment-5461/pods/webserver-deployment-6676bcd6d4-cp4mt 9345a4f0-4aca-496a-848e-17b16f1fcee9 15298 0 2020-05-31 12:49:41 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e03851ed-9213-41f6-8ad4-9953cde54db8 0xc003154a90 0xc003154a91}] []  [{kube-controller-manager Update v1 2020-05-31 12:49:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e03851ed-9213-41f6-8ad4-9953cde54db8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-31 12:49:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qbfmp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qbfmp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qbfmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-05-31 12:49:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 31 12:49:43.747: INFO: Pod "webserver-deployment-6676bcd6d4-gbtsz" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-gbtsz webserver-deployment-6676bcd6d4- deployment-5461 /api/v1/namespaces/deployment-5461/pods/webserver-deployment-6676bcd6d4-gbtsz 50c899f8-67ad-45b0-8f89-c45a68c074c3 15299 0 2020-05-31 12:49:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e03851ed-9213-41f6-8ad4-9953cde54db8 0xc003154c30 0xc003154c31}] []  [{kube-controller-manager Update v1 2020-05-31 12:49:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e03851ed-9213-41f6-8ad4-9953cde54db8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-31 12:49:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.152\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qbfmp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qbfmp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qbfmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDe
vices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 
12:49:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.152,StartTime:2020-05-31 12:49:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.152,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 31 12:49:43.748: INFO: Pod "webserver-deployment-6676bcd6d4-hcfvd" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-hcfvd webserver-deployment-6676bcd6d4- deployment-5461 /api/v1/namespaces/deployment-5461/pods/webserver-deployment-6676bcd6d4-hcfvd 331d1dac-0a10-4ad0-97e6-e8de8bf8094d 15253 0 2020-05-31 12:49:41 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e03851ed-9213-41f6-8ad4-9953cde54db8 0xc003154e00 0xc003154e01}] []  [{kube-controller-manager Update v1 2020-05-31 12:49:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e03851ed-9213-41f6-8ad4-9953cde54db8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-31 12:49:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qbfmp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qbfmp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qbfmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-05-31 12:49:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 31 12:49:43.748: INFO: Pod "webserver-deployment-6676bcd6d4-jkfqw" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-jkfqw webserver-deployment-6676bcd6d4- deployment-5461 /api/v1/namespaces/deployment-5461/pods/webserver-deployment-6676bcd6d4-jkfqw b09364d5-3920-4f8a-9ea8-a3b38a330f16 15132 0 2020-05-31 12:49:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e03851ed-9213-41f6-8ad4-9953cde54db8 0xc003154fc0 0xc003154fc1}] []  [{kube-controller-manager Update v1 2020-05-31 12:49:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e03851ed-9213-41f6-8ad4-9953cde54db8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-31 12:49:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qbfmp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qbfmp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qbfmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:39 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-05-31 12:49:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 31 12:49:43.748: INFO: Pod "webserver-deployment-6676bcd6d4-kdwzf" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-kdwzf webserver-deployment-6676bcd6d4- deployment-5461 /api/v1/namespaces/deployment-5461/pods/webserver-deployment-6676bcd6d4-kdwzf 0660a946-f2a4-4285-a79a-e524535cf41b 15150 0 2020-05-31 12:49:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 e03851ed-9213-41f6-8ad4-9953cde54db8 0xc003155180 0xc003155181}] []  [{kube-controller-manager Update v1 2020-05-31 12:49:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e03851ed-9213-41f6-8ad4-9953cde54db8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-31 12:49:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qbfmp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qbfmp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qbfmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:39 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-05-31 12:49:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 46 lines ...
May 31 12:49:43.780: INFO: Pod "webserver-deployment-84855cf797-wvqr4" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-wvqr4 webserver-deployment-84855cf797- deployment-5461 /api/v1/namespaces/deployment-5461/pods/webserver-deployment-84855cf797-wvqr4 b6fb2ca6-9194-46f8-8d59-5b81349919e8 15286 0 2020-05-31 12:49:41 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 51eb52e7-dd9a-45d5-aea6-97c15cc6cb8e 0xc0030f3ae0 0xc0030f3ae1}] []  [{kube-controller-manager Update v1 2020-05-31 12:49:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"51eb52e7-dd9a-45d5-aea6-97c15cc6cb8e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-31 12:49:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qbfmp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qbfmp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qbfmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 12:49:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-05-31 12:49:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
May 31 12:49:43.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5461" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":292,"completed":139,"skipped":2052,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 5 lines ...
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
May 31 12:49:52.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8945" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":292,"completed":140,"skipped":2065,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-677d1a77-2508-40c7-b3d8-5f5630727bae
STEP: Creating a pod to test consume secrets
May 31 12:49:52.351: INFO: Waiting up to 5m0s for pod "pod-secrets-62818e72-6ec4-465e-9cf8-3afaba72231f" in namespace "secrets-2590" to be "Succeeded or Failed"
May 31 12:49:52.362: INFO: Pod "pod-secrets-62818e72-6ec4-465e-9cf8-3afaba72231f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.479852ms
May 31 12:49:54.372: INFO: Pod "pod-secrets-62818e72-6ec4-465e-9cf8-3afaba72231f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02108554s
May 31 12:49:56.389: INFO: Pod "pod-secrets-62818e72-6ec4-465e-9cf8-3afaba72231f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037937436s
May 31 12:49:58.403: INFO: Pod "pod-secrets-62818e72-6ec4-465e-9cf8-3afaba72231f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052060807s
STEP: Saw pod success
May 31 12:49:58.403: INFO: Pod "pod-secrets-62818e72-6ec4-465e-9cf8-3afaba72231f" satisfied condition "Succeeded or Failed"
May 31 12:49:58.407: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-62818e72-6ec4-465e-9cf8-3afaba72231f container secret-volume-test: <nil>
STEP: delete the pod
May 31 12:49:58.458: INFO: Waiting for pod pod-secrets-62818e72-6ec4-465e-9cf8-3afaba72231f to disappear
May 31 12:49:58.468: INFO: Pod pod-secrets-62818e72-6ec4-465e-9cf8-3afaba72231f no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
May 31 12:49:58.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2590" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":141,"skipped":2073,"failed":0}
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 38 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
May 31 12:50:14.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3786" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":292,"completed":142,"skipped":2075,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 17 lines ...
May 31 12:52:47.491: INFO: Restart count of pod container-probe-9427/liveness-12bd7f50-93f7-4413-9852-5ca7091a19c3 is now 5 (2m28.591474802s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
May 31 12:52:47.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9427" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":292,"completed":143,"skipped":2096,"failed":0}
SSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 9 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 12:52:47.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7944" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":292,"completed":144,"skipped":2100,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
May 31 12:52:53.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8940" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":292,"completed":145,"skipped":2115,"failed":0}

------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-c8fw
STEP: Creating a pod to test atomic-volume-subpath
May 31 12:52:53.928: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-c8fw" in namespace "subpath-9713" to be "Succeeded or Failed"
May 31 12:52:53.931: INFO: Pod "pod-subpath-test-configmap-c8fw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.744695ms
May 31 12:52:55.946: INFO: Pod "pod-subpath-test-configmap-c8fw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01747806s
May 31 12:52:57.954: INFO: Pod "pod-subpath-test-configmap-c8fw": Phase="Running", Reason="", readiness=true. Elapsed: 4.025793273s
May 31 12:52:59.974: INFO: Pod "pod-subpath-test-configmap-c8fw": Phase="Running", Reason="", readiness=true. Elapsed: 6.04557715s
May 31 12:53:01.980: INFO: Pod "pod-subpath-test-configmap-c8fw": Phase="Running", Reason="", readiness=true. Elapsed: 8.051442017s
May 31 12:53:03.984: INFO: Pod "pod-subpath-test-configmap-c8fw": Phase="Running", Reason="", readiness=true. Elapsed: 10.055431464s
... skipping 2 lines ...
May 31 12:53:10.007: INFO: Pod "pod-subpath-test-configmap-c8fw": Phase="Running", Reason="", readiness=true. Elapsed: 16.078315285s
May 31 12:53:12.011: INFO: Pod "pod-subpath-test-configmap-c8fw": Phase="Running", Reason="", readiness=true. Elapsed: 18.083100998s
May 31 12:53:14.016: INFO: Pod "pod-subpath-test-configmap-c8fw": Phase="Running", Reason="", readiness=true. Elapsed: 20.087344475s
May 31 12:53:16.020: INFO: Pod "pod-subpath-test-configmap-c8fw": Phase="Running", Reason="", readiness=true. Elapsed: 22.092031412s
May 31 12:53:18.029: INFO: Pod "pod-subpath-test-configmap-c8fw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.100320491s
STEP: Saw pod success
May 31 12:53:18.029: INFO: Pod "pod-subpath-test-configmap-c8fw" satisfied condition "Succeeded or Failed"
May 31 12:53:18.036: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-configmap-c8fw container test-container-subpath-configmap-c8fw: <nil>
STEP: delete the pod
May 31 12:53:18.090: INFO: Waiting for pod pod-subpath-test-configmap-c8fw to disappear
May 31 12:53:18.096: INFO: Pod pod-subpath-test-configmap-c8fw no longer exists
STEP: Deleting pod pod-subpath-test-configmap-c8fw
May 31 12:53:18.096: INFO: Deleting pod "pod-subpath-test-configmap-c8fw" in namespace "subpath-9713"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
May 31 12:53:18.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9713" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":292,"completed":146,"skipped":2115,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 11 lines ...
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 12:53:38.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7558" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":292,"completed":147,"skipped":2137,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-7c92e286-42b1-49d9-adad-9debddd5d037
STEP: Creating a pod to test consume configMaps
May 31 12:53:38.216: INFO: Waiting up to 5m0s for pod "pod-configmaps-31487de7-51aa-4158-b3ee-136cfd9cf764" in namespace "configmap-2617" to be "Succeeded or Failed"
May 31 12:53:38.220: INFO: Pod "pod-configmaps-31487de7-51aa-4158-b3ee-136cfd9cf764": Phase="Pending", Reason="", readiness=false. Elapsed: 3.773306ms
May 31 12:53:40.238: INFO: Pod "pod-configmaps-31487de7-51aa-4158-b3ee-136cfd9cf764": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022312504s
May 31 12:53:42.248: INFO: Pod "pod-configmaps-31487de7-51aa-4158-b3ee-136cfd9cf764": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031911881s
STEP: Saw pod success
May 31 12:53:42.248: INFO: Pod "pod-configmaps-31487de7-51aa-4158-b3ee-136cfd9cf764" satisfied condition "Succeeded or Failed"
May 31 12:53:42.256: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-31487de7-51aa-4158-b3ee-136cfd9cf764 container configmap-volume-test: <nil>
STEP: delete the pod
May 31 12:53:42.300: INFO: Waiting for pod pod-configmaps-31487de7-51aa-4158-b3ee-136cfd9cf764 to disappear
May 31 12:53:42.319: INFO: Pod pod-configmaps-31487de7-51aa-4158-b3ee-136cfd9cf764 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
May 31 12:53:42.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2617" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":292,"completed":148,"skipped":2162,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 149 lines ...
May 31 12:54:20.860: INFO: stderr: ""
May 31 12:54:20.860: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 12:54:20.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8192" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":292,"completed":149,"skipped":2174,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-5369aa21-5821-4ba7-8397-47c591589f3d
STEP: Creating a pod to test consume secrets
May 31 12:54:21.042: INFO: Waiting up to 5m0s for pod "pod-secrets-717e0902-f9f1-4a12-af27-88d5ce15dde1" in namespace "secrets-1748" to be "Succeeded or Failed"
May 31 12:54:21.068: INFO: Pod "pod-secrets-717e0902-f9f1-4a12-af27-88d5ce15dde1": Phase="Pending", Reason="", readiness=false. Elapsed: 26.264071ms
May 31 12:54:23.076: INFO: Pod "pod-secrets-717e0902-f9f1-4a12-af27-88d5ce15dde1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034271343s
May 31 12:54:25.090: INFO: Pod "pod-secrets-717e0902-f9f1-4a12-af27-88d5ce15dde1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047749987s
STEP: Saw pod success
May 31 12:54:25.090: INFO: Pod "pod-secrets-717e0902-f9f1-4a12-af27-88d5ce15dde1" satisfied condition "Succeeded or Failed"
May 31 12:54:25.099: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-717e0902-f9f1-4a12-af27-88d5ce15dde1 container secret-volume-test: <nil>
STEP: delete the pod
May 31 12:54:25.161: INFO: Waiting for pod pod-secrets-717e0902-f9f1-4a12-af27-88d5ce15dde1 to disappear
May 31 12:54:25.175: INFO: Pod pod-secrets-717e0902-f9f1-4a12-af27-88d5ce15dde1 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
May 31 12:54:25.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1748" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":292,"completed":150,"skipped":2179,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 22 lines ...
May 31 12:54:55.439: INFO: Waiting for statefulset status.replicas updated to 0
May 31 12:54:55.450: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
May 31 12:54:55.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3674" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":292,"completed":151,"skipped":2206,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-739b9d02-fe60-4c2c-bbec-3f0a9cacc3f2
STEP: Creating a pod to test consume secrets
May 31 12:54:55.556: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ff05e85b-0ee9-4b1f-8b64-732b6e66500a" in namespace "projected-5863" to be "Succeeded or Failed"
May 31 12:54:55.560: INFO: Pod "pod-projected-secrets-ff05e85b-0ee9-4b1f-8b64-732b6e66500a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.134981ms
May 31 12:54:57.579: INFO: Pod "pod-projected-secrets-ff05e85b-0ee9-4b1f-8b64-732b6e66500a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022647337s
May 31 12:54:59.586: INFO: Pod "pod-projected-secrets-ff05e85b-0ee9-4b1f-8b64-732b6e66500a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029972031s
STEP: Saw pod success
May 31 12:54:59.587: INFO: Pod "pod-projected-secrets-ff05e85b-0ee9-4b1f-8b64-732b6e66500a" satisfied condition "Succeeded or Failed"
May 31 12:54:59.600: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-ff05e85b-0ee9-4b1f-8b64-732b6e66500a container projected-secret-volume-test: <nil>
STEP: delete the pod
May 31 12:54:59.664: INFO: Waiting for pod pod-projected-secrets-ff05e85b-0ee9-4b1f-8b64-732b6e66500a to disappear
May 31 12:54:59.683: INFO: Pod pod-projected-secrets-ff05e85b-0ee9-4b1f-8b64-732b6e66500a no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
May 31 12:54:59.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5863" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":152,"skipped":2240,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
... skipping 419 lines ...
May 31 12:55:13.649: INFO: 99 %ile: 1.348016107s
May 31 12:55:13.649: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  test/e2e/framework/framework.go:175
May 31 12:55:13.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9299" for this suite.
•{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":292,"completed":153,"skipped":2253,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
May 31 12:55:13.696: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override arguments
May 31 12:55:13.803: INFO: Waiting up to 5m0s for pod "client-containers-68989808-174e-49b4-a838-cde421fc8030" in namespace "containers-528" to be "Succeeded or Failed"
May 31 12:55:13.815: INFO: Pod "client-containers-68989808-174e-49b4-a838-cde421fc8030": Phase="Pending", Reason="", readiness=false. Elapsed: 11.605914ms
May 31 12:55:15.835: INFO: Pod "client-containers-68989808-174e-49b4-a838-cde421fc8030": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032254497s
May 31 12:55:17.845: INFO: Pod "client-containers-68989808-174e-49b4-a838-cde421fc8030": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04135998s
STEP: Saw pod success
May 31 12:55:17.846: INFO: Pod "client-containers-68989808-174e-49b4-a838-cde421fc8030" satisfied condition "Succeeded or Failed"
May 31 12:55:17.862: INFO: Trying to get logs from node kind-worker2 pod client-containers-68989808-174e-49b4-a838-cde421fc8030 container test-container: <nil>
STEP: delete the pod
May 31 12:55:17.910: INFO: Waiting for pod client-containers-68989808-174e-49b4-a838-cde421fc8030 to disappear
May 31 12:55:17.922: INFO: Pod client-containers-68989808-174e-49b4-a838-cde421fc8030 no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
May 31 12:55:17.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-528" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":292,"completed":154,"skipped":2268,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
May 31 12:55:34.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4528" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":292,"completed":155,"skipped":2343,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 8 lines ...
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:175
May 31 12:55:38.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2516" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":292,"completed":156,"skipped":2345,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-3c59148b-ee58-4d4d-b5ed-5609d0098962
STEP: Creating a pod to test consume configMaps
May 31 12:55:38.812: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e325871f-54c9-440b-b92a-391771a99e45" in namespace "projected-8638" to be "Succeeded or Failed"
May 31 12:55:38.823: INFO: Pod "pod-projected-configmaps-e325871f-54c9-440b-b92a-391771a99e45": Phase="Pending", Reason="", readiness=false. Elapsed: 11.310805ms
May 31 12:55:40.836: INFO: Pod "pod-projected-configmaps-e325871f-54c9-440b-b92a-391771a99e45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023768812s
May 31 12:55:42.850: INFO: Pod "pod-projected-configmaps-e325871f-54c9-440b-b92a-391771a99e45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038047711s
STEP: Saw pod success
May 31 12:55:42.850: INFO: Pod "pod-projected-configmaps-e325871f-54c9-440b-b92a-391771a99e45" satisfied condition "Succeeded or Failed"
May 31 12:55:42.866: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-e325871f-54c9-440b-b92a-391771a99e45 container projected-configmap-volume-test: <nil>
STEP: delete the pod
May 31 12:55:42.921: INFO: Waiting for pod pod-projected-configmaps-e325871f-54c9-440b-b92a-391771a99e45 to disappear
May 31 12:55:42.939: INFO: Pod pod-projected-configmaps-e325871f-54c9-440b-b92a-391771a99e45 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
May 31 12:55:42.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8638" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":292,"completed":157,"skipped":2356,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 28 lines ...
May 31 12:56:07.575: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 31 12:56:07.875: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
May 31 12:56:07.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9171" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":292,"completed":158,"skipped":2367,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 48 lines ...
May 31 12:58:18.503: INFO: Waiting for statefulset status.replicas updated to 0
May 31 12:58:18.507: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
May 31 12:58:18.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3839" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":292,"completed":159,"skipped":2376,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 78 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 12:59:15.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9829" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":292,"completed":160,"skipped":2388,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 16 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
May 31 12:59:28.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6033" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":292,"completed":161,"skipped":2391,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 12:59:28.712: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9a9d4a00-843a-455f-b3c7-bf693f3d319e" in namespace "projected-1725" to be "Succeeded or Failed"
May 31 12:59:28.724: INFO: Pod "downwardapi-volume-9a9d4a00-843a-455f-b3c7-bf693f3d319e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.285713ms
May 31 12:59:30.728: INFO: Pod "downwardapi-volume-9a9d4a00-843a-455f-b3c7-bf693f3d319e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015200571s
May 31 12:59:32.735: INFO: Pod "downwardapi-volume-9a9d4a00-843a-455f-b3c7-bf693f3d319e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022780174s
STEP: Saw pod success
May 31 12:59:32.735: INFO: Pod "downwardapi-volume-9a9d4a00-843a-455f-b3c7-bf693f3d319e" satisfied condition "Succeeded or Failed"
May 31 12:59:32.743: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-9a9d4a00-843a-455f-b3c7-bf693f3d319e container client-container: <nil>
STEP: delete the pod
May 31 12:59:32.804: INFO: Waiting for pod downwardapi-volume-9a9d4a00-843a-455f-b3c7-bf693f3d319e to disappear
May 31 12:59:32.808: INFO: Pod downwardapi-volume-9a9d4a00-843a-455f-b3c7-bf693f3d319e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
May 31 12:59:32.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1725" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":292,"completed":162,"skipped":2400,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should have an terminated reason [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
May 31 12:59:36.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7550" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":292,"completed":163,"skipped":2419,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 12:59:36.896: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 31 12:59:36.947: INFO: Waiting up to 5m0s for pod "pod-30063d6f-7629-4f2b-a7a0-cfe4203c80d9" in namespace "emptydir-2320" to be "Succeeded or Failed"
May 31 12:59:36.952: INFO: Pod "pod-30063d6f-7629-4f2b-a7a0-cfe4203c80d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152202ms
May 31 12:59:38.976: INFO: Pod "pod-30063d6f-7629-4f2b-a7a0-cfe4203c80d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028313521s
May 31 12:59:40.983: INFO: Pod "pod-30063d6f-7629-4f2b-a7a0-cfe4203c80d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035448212s
STEP: Saw pod success
May 31 12:59:40.983: INFO: Pod "pod-30063d6f-7629-4f2b-a7a0-cfe4203c80d9" satisfied condition "Succeeded or Failed"
May 31 12:59:40.995: INFO: Trying to get logs from node kind-worker2 pod pod-30063d6f-7629-4f2b-a7a0-cfe4203c80d9 container test-container: <nil>
STEP: delete the pod
May 31 12:59:41.031: INFO: Waiting for pod pod-30063d6f-7629-4f2b-a7a0-cfe4203c80d9 to disappear
May 31 12:59:41.046: INFO: Pod pod-30063d6f-7629-4f2b-a7a0-cfe4203c80d9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 12:59:41.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2320" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":164,"skipped":2429,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 26 lines ...
  test/e2e/framework/framework.go:175
May 31 12:59:49.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8497" for this suite.
STEP: Destroying namespace "webhook-8497-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":292,"completed":165,"skipped":2443,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating secret secrets-1470/secret-test-c8817f55-30b5-4003-b927-3ffb22e27a18
STEP: Creating a pod to test consume secrets
May 31 12:59:49.462: INFO: Waiting up to 5m0s for pod "pod-configmaps-6d793d5b-c341-4721-8009-19d95fde42f6" in namespace "secrets-1470" to be "Succeeded or Failed"
May 31 12:59:49.475: INFO: Pod "pod-configmaps-6d793d5b-c341-4721-8009-19d95fde42f6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.878887ms
May 31 12:59:51.492: INFO: Pod "pod-configmaps-6d793d5b-c341-4721-8009-19d95fde42f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029389688s
May 31 12:59:53.502: INFO: Pod "pod-configmaps-6d793d5b-c341-4721-8009-19d95fde42f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039698052s
STEP: Saw pod success
May 31 12:59:53.502: INFO: Pod "pod-configmaps-6d793d5b-c341-4721-8009-19d95fde42f6" satisfied condition "Succeeded or Failed"
May 31 12:59:53.512: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-6d793d5b-c341-4721-8009-19d95fde42f6 container env-test: <nil>
STEP: delete the pod
May 31 12:59:53.564: INFO: Waiting for pod pod-configmaps-6d793d5b-c341-4721-8009-19d95fde42f6 to disappear
May 31 12:59:53.579: INFO: Pod pod-configmaps-6d793d5b-c341-4721-8009-19d95fde42f6 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
May 31 12:59:53.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1470" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":292,"completed":166,"skipped":2540,"failed":0}

------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 53 lines ...
May 31 13:02:07.050: INFO: Waiting for statefulset status.replicas updated to 0
May 31 13:02:07.059: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
May 31 13:02:07.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9017" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":292,"completed":167,"skipped":2540,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-dac84083-875f-4774-9328-eb8b2f2cc93f
STEP: Creating a pod to test consume secrets
May 31 13:02:07.211: INFO: Waiting up to 5m0s for pod "pod-secrets-141934cf-cd47-4a25-85cc-58d2d07e30c1" in namespace "secrets-1842" to be "Succeeded or Failed"
May 31 13:02:07.217: INFO: Pod "pod-secrets-141934cf-cd47-4a25-85cc-58d2d07e30c1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.509992ms
May 31 13:02:09.240: INFO: Pod "pod-secrets-141934cf-cd47-4a25-85cc-58d2d07e30c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029118057s
May 31 13:02:11.248: INFO: Pod "pod-secrets-141934cf-cd47-4a25-85cc-58d2d07e30c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037089054s
STEP: Saw pod success
May 31 13:02:11.248: INFO: Pod "pod-secrets-141934cf-cd47-4a25-85cc-58d2d07e30c1" satisfied condition "Succeeded or Failed"
May 31 13:02:11.253: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-141934cf-cd47-4a25-85cc-58d2d07e30c1 container secret-volume-test: <nil>
STEP: delete the pod
May 31 13:02:11.305: INFO: Waiting for pod pod-secrets-141934cf-cd47-4a25-85cc-58d2d07e30c1 to disappear
May 31 13:02:11.310: INFO: Pod pod-secrets-141934cf-cd47-4a25-85cc-58d2d07e30c1 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
May 31 13:02:11.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1842" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":168,"skipped":2551,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
May 31 13:02:27.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9877" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":292,"completed":169,"skipped":2558,"failed":0}
SSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 22 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 13:02:37.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8342" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":292,"completed":170,"skipped":2564,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 13:02:37.135: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
May 31 13:02:37.200: INFO: Waiting up to 5m0s for pod "pod-ff88d0e3-7fa6-4cb9-bd98-df3b52543063" in namespace "emptydir-182" to be "Succeeded or Failed"
May 31 13:02:37.210: INFO: Pod "pod-ff88d0e3-7fa6-4cb9-bd98-df3b52543063": Phase="Pending", Reason="", readiness=false. Elapsed: 9.833151ms
May 31 13:02:39.220: INFO: Pod "pod-ff88d0e3-7fa6-4cb9-bd98-df3b52543063": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019336433s
May 31 13:02:41.224: INFO: Pod "pod-ff88d0e3-7fa6-4cb9-bd98-df3b52543063": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023469416s
STEP: Saw pod success
May 31 13:02:41.224: INFO: Pod "pod-ff88d0e3-7fa6-4cb9-bd98-df3b52543063" satisfied condition "Succeeded or Failed"
May 31 13:02:41.228: INFO: Trying to get logs from node kind-worker2 pod pod-ff88d0e3-7fa6-4cb9-bd98-df3b52543063 container test-container: <nil>
STEP: delete the pod
May 31 13:02:41.248: INFO: Waiting for pod pod-ff88d0e3-7fa6-4cb9-bd98-df3b52543063 to disappear
May 31 13:02:41.252: INFO: Pod pod-ff88d0e3-7fa6-4cb9-bd98-df3b52543063 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 13:02:41.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-182" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":171,"skipped":2616,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-302/configmap-test-b2fc1434-1a8b-4a0a-82e3-fb9f865cc886
STEP: Creating a pod to test consume configMaps
May 31 13:02:41.315: INFO: Waiting up to 5m0s for pod "pod-configmaps-bd9026f0-ed80-4555-87a9-b76a4ab60566" in namespace "configmap-302" to be "Succeeded or Failed"
May 31 13:02:41.317: INFO: Pod "pod-configmaps-bd9026f0-ed80-4555-87a9-b76a4ab60566": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047309ms
May 31 13:02:43.331: INFO: Pod "pod-configmaps-bd9026f0-ed80-4555-87a9-b76a4ab60566": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01662261s
May 31 13:02:45.348: INFO: Pod "pod-configmaps-bd9026f0-ed80-4555-87a9-b76a4ab60566": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033262173s
STEP: Saw pod success
May 31 13:02:45.348: INFO: Pod "pod-configmaps-bd9026f0-ed80-4555-87a9-b76a4ab60566" satisfied condition "Succeeded or Failed"
May 31 13:02:45.359: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-bd9026f0-ed80-4555-87a9-b76a4ab60566 container env-test: <nil>
STEP: delete the pod
May 31 13:02:45.436: INFO: Waiting for pod pod-configmaps-bd9026f0-ed80-4555-87a9-b76a4ab60566 to disappear
May 31 13:02:45.444: INFO: Pod pod-configmaps-bd9026f0-ed80-4555-87a9-b76a4ab60566 no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
May 31 13:02:45.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-302" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":292,"completed":172,"skipped":2644,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
May 31 13:02:45.476: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May 31 13:02:45.610: INFO: Waiting up to 5m0s for pod "downward-api-679844d0-0efd-4cbf-9c04-befa6d0805d1" in namespace "downward-api-7840" to be "Succeeded or Failed"
May 31 13:02:45.636: INFO: Pod "downward-api-679844d0-0efd-4cbf-9c04-befa6d0805d1": Phase="Pending", Reason="", readiness=false. Elapsed: 24.891456ms
May 31 13:02:47.643: INFO: Pod "downward-api-679844d0-0efd-4cbf-9c04-befa6d0805d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032419779s
May 31 13:02:49.662: INFO: Pod "downward-api-679844d0-0efd-4cbf-9c04-befa6d0805d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051472353s
STEP: Saw pod success
May 31 13:02:49.663: INFO: Pod "downward-api-679844d0-0efd-4cbf-9c04-befa6d0805d1" satisfied condition "Succeeded or Failed"
May 31 13:02:49.675: INFO: Trying to get logs from node kind-worker2 pod downward-api-679844d0-0efd-4cbf-9c04-befa6d0805d1 container dapi-container: <nil>
STEP: delete the pod
May 31 13:02:49.738: INFO: Waiting for pod downward-api-679844d0-0efd-4cbf-9c04-befa6d0805d1 to disappear
May 31 13:02:49.762: INFO: Pod downward-api-679844d0-0efd-4cbf-9c04-befa6d0805d1 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
May 31 13:02:49.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7840" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":292,"completed":173,"skipped":2668,"failed":0}
S
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
May 31 13:02:53.944: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:02:53.950: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:02:53.976: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:02:53.986: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:02:53.994: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:02:53.999: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:02:54.010: INFO: Lookups using dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8110.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8110.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local jessie_udp@dns-test-service-2.dns-8110.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8110.svc.cluster.local]

May 31 13:02:59.015: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:02:59.020: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:02:59.027: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:02:59.032: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:02:59.042: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:02:59.046: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:02:59.048: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:02:59.051: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:02:59.059: INFO: Lookups using dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8110.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8110.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local jessie_udp@dns-test-service-2.dns-8110.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8110.svc.cluster.local]

May 31 13:03:04.028: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:04.041: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:04.058: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:04.076: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:04.135: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:04.155: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:04.172: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:04.188: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:04.204: INFO: Lookups using dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8110.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8110.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local jessie_udp@dns-test-service-2.dns-8110.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8110.svc.cluster.local]

May 31 13:03:09.015: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:09.020: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:09.027: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:09.032: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:09.047: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:09.050: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:09.054: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:09.058: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:09.069: INFO: Lookups using dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8110.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8110.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local jessie_udp@dns-test-service-2.dns-8110.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8110.svc.cluster.local]

May 31 13:03:14.019: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:14.028: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:14.034: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:14.042: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:14.058: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:14.064: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:14.073: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:14.083: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:14.099: INFO: Lookups using dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8110.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8110.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local jessie_udp@dns-test-service-2.dns-8110.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8110.svc.cluster.local]

May 31 13:03:19.014: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:19.019: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:19.024: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:19.030: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:19.040: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:19.044: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:19.047: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:19.050: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:19.056: INFO: Lookups using dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8110.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8110.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local jessie_udp@dns-test-service-2.dns-8110.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8110.svc.cluster.local]

May 31 13:03:24.114: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8110.svc.cluster.local from pod dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae: the server could not find the requested resource (get pods dns-test-324ed077-6f43-4867-bcd3-671006c171ae)
May 31 13:03:24.128: INFO: Lookups using dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae failed for: [jessie_udp@dns-test-service-2.dns-8110.svc.cluster.local]

May 31 13:03:29.125: INFO: DNS probes using dns-8110/dns-test-324ed077-6f43-4867-bcd3-671006c171ae succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
May 31 13:03:29.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8110" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":292,"completed":174,"skipped":2669,"failed":0}
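The lookup targets in the Subdomain probes above follow a fixed naming scheme: two client images (wheezy, jessie), two protocols (udp, tcp), and two FQDNs (the pod subdomain name and the headless service name). A small sketch that reconstructs the probed names (a hypothetical helper, not the test's actual code):

```python
def subdomain_probe_names(service, namespace, pod_prefix="dns-querier-2",
                          images=("wheezy", "jessie"), protos=("udp", "tcp")):
    """Build the probe names the DNS Subdomain test logs, e.g.
    wheezy_udp@dns-querier-2.dns-test-service-2.dns-8110.svc.cluster.local."""
    svc_fqdn = f"{service}.{namespace}.svc.cluster.local"
    targets = (f"{pod_prefix}.{svc_fqdn}", svc_fqdn)
    # Order matches the log: image, then target, then protocol.
    return [f"{img}_{proto}@{t}"
            for img in images for t in targets for proto in protos]

for name in subdomain_probe_names("dns-test-service-2", "dns-8110"):
    print(name)
```

The transient "Unable to read ..." failures above are expected while the DNS records propagate; the test retries every ~5s until all eight lookups succeed.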

------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
May 31 13:03:45.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6679" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":292,"completed":175,"skipped":2669,"failed":0}
SSSSSSS
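The ResourceQuota test above creates a ConfigMap, checks that quota usage captured it, deletes it, and checks that usage was released. The bookkeeping it verifies can be sketched as (a hypothetical class, not the quota controller's actual implementation):

```python
class QuotaTracker:
    """Minimal sketch of quota usage accounting: usage rises when an object
    is created against the quota and is released when it is deleted."""
    def __init__(self, hard):
        self.hard = dict(hard)              # e.g. {"configmaps": 1}
        self.used = {k: 0 for k in hard}    # current usage per resource

    def create(self, resource):
        if self.used[resource] + 1 > self.hard[resource]:
            raise RuntimeError(f"exceeded quota for {resource}")
        self.used[resource] += 1

    def delete(self, resource):
        self.used[resource] = max(0, self.used[resource] - 1)

q = QuotaTracker({"configmaps": 1})
q.create("configmaps")
print(q.used["configmaps"])  # 1  (usage captured)
q.delete("configmaps")
print(q.used["configmaps"])  # 0  (usage released)
```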
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 15 lines ...
May 31 13:03:51.406: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  test/e2e/framework/framework.go:597
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 13:04:03.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7430" for this suite.
STEP: Destroying namespace "webhook-7430-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":292,"completed":176,"skipped":2676,"failed":0}
SSS
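The four timeout cases exercised above reduce to one rule: a call is rejected only when the webhook exceeds its timeout *and* the failure policy is Fail. A standalone sketch of that decision (hypothetical function names, not the apiserver's code; the 10s default for an unset timeout in v1 is taken from the log):

```python
def admission_outcome(latency_s, timeout_s=10.0, failure_policy="Fail"):
    """Decide whether an admission request is rejected, per the cases above:
    - timeout (1s) < latency (5s), policy Fail   -> rejected
    - timeout (1s) < latency (5s), policy Ignore -> allowed
    - timeout (30s) > latency (5s)               -> allowed
    - timeout unset (defaults to 10s in v1)      -> allowed for 5s latency
    """
    timed_out = latency_s > timeout_s
    if timed_out and failure_policy == "Fail":
        return "rejected"
    return "allowed"

print(admission_outcome(5.0, timeout_s=1.0))                           # rejected
print(admission_outcome(5.0, timeout_s=1.0, failure_policy="Ignore"))  # allowed
print(admission_outcome(5.0, timeout_s=30.0))                          # allowed
print(admission_outcome(5.0))                                          # allowed
```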
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 22 lines ...
May 31 13:04:13.923: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8361 /api/v1/namespaces/watch-8361/configmaps/e2e-watch-test-label-changed 853f74eb-5473-435b-9749-596c2688485a 21882 0 2020-05-31 13:04:03 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-31 13:04:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
May 31 13:04:13.923: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8361 /api/v1/namespaces/watch-8361/configmaps/e2e-watch-test-label-changed 853f74eb-5473-435b-9749-596c2688485a 21883 0 2020-05-31 13:04:03 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-31 13:04:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
May 31 13:04:13.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8361" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":292,"completed":177,"skipped":2679,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
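The Watchers test above checks that a label-selector watch reports DELETED when an object's labels stop matching the selector (and ADDED when they match again), even though the object itself still exists. That event mapping can be sketched as (hypothetical helper, not client-go's implementation):

```python
def selector_event(old_labels, new_labels, selector):
    """Translate an object update into the event a label-selector watch sees:
    matched -> matched:   MODIFIED
    matched -> unmatched: DELETED  (object left the watched set)
    unmatched -> matched: ADDED    (object entered the watched set)
    otherwise:            no event
    """
    def matches(labels):
        return all(labels.get(k) == v for k, v in selector.items())
    before, after = matches(old_labels), matches(new_labels)
    if before and after:
        return "MODIFIED"
    if before and not after:
        return "DELETED"
    if not before and after:
        return "ADDED"
    return None
```

This is why the log above shows a DELETED event for `e2e-watch-test-label-changed` while the ConfigMap still exists in the API: only its membership in the selected set changed.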
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
May 31 13:04:13.934: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override command
May 31 13:04:13.984: INFO: Waiting up to 5m0s for pod "client-containers-66b8e403-076c-4dc1-8e90-393426bf33eb" in namespace "containers-173" to be "Succeeded or Failed"
May 31 13:04:13.989: INFO: Pod "client-containers-66b8e403-076c-4dc1-8e90-393426bf33eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.760043ms
May 31 13:04:15.995: INFO: Pod "client-containers-66b8e403-076c-4dc1-8e90-393426bf33eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011665584s
May 31 13:04:18.004: INFO: Pod "client-containers-66b8e403-076c-4dc1-8e90-393426bf33eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02063503s
STEP: Saw pod success
May 31 13:04:18.004: INFO: Pod "client-containers-66b8e403-076c-4dc1-8e90-393426bf33eb" satisfied condition "Succeeded or Failed"
May 31 13:04:18.011: INFO: Trying to get logs from node kind-worker2 pod client-containers-66b8e403-076c-4dc1-8e90-393426bf33eb container test-container: <nil>
STEP: delete the pod
May 31 13:04:18.035: INFO: Waiting for pod client-containers-66b8e403-076c-4dc1-8e90-393426bf33eb to disappear
May 31 13:04:18.041: INFO: Pod client-containers-66b8e403-076c-4dc1-8e90-393426bf33eb no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
May 31 13:04:18.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-173" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":292,"completed":178,"skipped":2706,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 13:04:18.059: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
May 31 13:04:18.113: INFO: Waiting up to 5m0s for pod "pod-6209ad7b-029b-4b58-8273-5b42512b579b" in namespace "emptydir-2342" to be "Succeeded or Failed"
May 31 13:04:18.120: INFO: Pod "pod-6209ad7b-029b-4b58-8273-5b42512b579b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.354592ms
May 31 13:04:20.131: INFO: Pod "pod-6209ad7b-029b-4b58-8273-5b42512b579b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018726112s
May 31 13:04:22.136: INFO: Pod "pod-6209ad7b-029b-4b58-8273-5b42512b579b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023050871s
STEP: Saw pod success
May 31 13:04:22.136: INFO: Pod "pod-6209ad7b-029b-4b58-8273-5b42512b579b" satisfied condition "Succeeded or Failed"
May 31 13:04:22.139: INFO: Trying to get logs from node kind-worker2 pod pod-6209ad7b-029b-4b58-8273-5b42512b579b container test-container: <nil>
STEP: delete the pod
May 31 13:04:22.160: INFO: Waiting for pod pod-6209ad7b-029b-4b58-8273-5b42512b579b to disappear
May 31 13:04:22.171: INFO: Pod pod-6209ad7b-029b-4b58-8273-5b42512b579b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 13:04:22.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2342" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":179,"skipped":2709,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 13:04:41.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1932" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":292,"completed":180,"skipped":2731,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
... skipping 12 lines ...
May 31 13:04:45.799: INFO: Terminating Job.batch foo pods took: 306.61823ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
May 31 13:05:25.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3556" for this suite.
•{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":292,"completed":181,"skipped":2744,"failed":0}
SSS
------------------------------
[k8s.io] Variable Expansion 
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 31 13:05:25.330: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod with failed condition
STEP: updating the pod
May 31 13:07:25.916: INFO: Successfully updated pod "var-expansion-6dff621b-7f02-4287-a0f9-30a53617cb95"
STEP: waiting for pod running
STEP: deleting the pod gracefully
May 31 13:07:27.934: INFO: Deleting pod "var-expansion-6dff621b-7f02-4287-a0f9-30a53617cb95" in namespace "var-expansion-4628"
May 31 13:07:27.951: INFO: Wait up to 5m0s for pod "var-expansion-6dff621b-7f02-4287-a0f9-30a53617cb95" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
May 31 13:08:05.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4628" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":292,"completed":182,"skipped":2747,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 13:08:06.071: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad136355-a064-4ba4-94d4-eea332e92522" in namespace "projected-1165" to be "Succeeded or Failed"
May 31 13:08:06.083: INFO: Pod "downwardapi-volume-ad136355-a064-4ba4-94d4-eea332e92522": Phase="Pending", Reason="", readiness=false. Elapsed: 12.261694ms
May 31 13:08:08.098: INFO: Pod "downwardapi-volume-ad136355-a064-4ba4-94d4-eea332e92522": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026456443s
May 31 13:08:10.105: INFO: Pod "downwardapi-volume-ad136355-a064-4ba4-94d4-eea332e92522": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034282664s
STEP: Saw pod success
May 31 13:08:10.106: INFO: Pod "downwardapi-volume-ad136355-a064-4ba4-94d4-eea332e92522" satisfied condition "Succeeded or Failed"
May 31 13:08:10.111: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-ad136355-a064-4ba4-94d4-eea332e92522 container client-container: <nil>
STEP: delete the pod
May 31 13:08:10.140: INFO: Waiting for pod downwardapi-volume-ad136355-a064-4ba4-94d4-eea332e92522 to disappear
May 31 13:08:10.143: INFO: Pod downwardapi-volume-ad136355-a064-4ba4-94d4-eea332e92522 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
May 31 13:08:10.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1165" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":183,"skipped":2749,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-408ec99a-1a91-4d72-95d0-0c3f5826b9fd
STEP: Creating a pod to test consume secrets
May 31 13:08:10.299: INFO: Waiting up to 5m0s for pod "pod-secrets-f1e08700-ae3e-4f79-8445-81b0a407a05b" in namespace "secrets-8447" to be "Succeeded or Failed"
May 31 13:08:10.302: INFO: Pod "pod-secrets-f1e08700-ae3e-4f79-8445-81b0a407a05b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.053649ms
May 31 13:08:12.308: INFO: Pod "pod-secrets-f1e08700-ae3e-4f79-8445-81b0a407a05b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009295508s
May 31 13:08:14.318: INFO: Pod "pod-secrets-f1e08700-ae3e-4f79-8445-81b0a407a05b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01919065s
STEP: Saw pod success
May 31 13:08:14.318: INFO: Pod "pod-secrets-f1e08700-ae3e-4f79-8445-81b0a407a05b" satisfied condition "Succeeded or Failed"
May 31 13:08:14.324: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-f1e08700-ae3e-4f79-8445-81b0a407a05b container secret-volume-test: <nil>
STEP: delete the pod
May 31 13:08:14.363: INFO: Waiting for pod pod-secrets-f1e08700-ae3e-4f79-8445-81b0a407a05b to disappear
May 31 13:08:14.369: INFO: Pod pod-secrets-f1e08700-ae3e-4f79-8445-81b0a407a05b no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
May 31 13:08:14.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8447" for this suite.
STEP: Destroying namespace "secret-namespace-4491" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":292,"completed":184,"skipped":2754,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 13 lines ...
May 31 13:08:16.511: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
May 31 13:08:16.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4778" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":292,"completed":185,"skipped":2772,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 46 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 13:08:45.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2726" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":292,"completed":186,"skipped":2793,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
May 31 13:08:50.103: INFO: Successfully updated pod "labelsupdate1fcb6822-61a3-41b4-a856-9ba5d051f9fe"
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
May 31 13:08:52.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6245" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":292,"completed":187,"skipped":2925,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
May 31 13:08:52.200: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-f5271c31-a57d-4501-ab63-9826e864fd53" in namespace "security-context-test-290" to be "Succeeded or Failed"
May 31 13:08:52.204: INFO: Pod "alpine-nnp-false-f5271c31-a57d-4501-ab63-9826e864fd53": Phase="Pending", Reason="", readiness=false. Elapsed: 3.936198ms
May 31 13:08:54.211: INFO: Pod "alpine-nnp-false-f5271c31-a57d-4501-ab63-9826e864fd53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010751746s
May 31 13:08:56.215: INFO: Pod "alpine-nnp-false-f5271c31-a57d-4501-ab63-9826e864fd53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014186607s
May 31 13:08:56.215: INFO: Pod "alpine-nnp-false-f5271c31-a57d-4501-ab63-9826e864fd53" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
May 31 13:08:56.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-290" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":188,"skipped":2952,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
May 31 13:08:56.351: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://127.0.0.1:40063 --kubeconfig=/root/.kube/kind-test-config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 13:08:56.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1722" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":292,"completed":189,"skipped":2992,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 13 lines ...
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
May 31 13:09:13.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9886" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":292,"completed":190,"skipped":3002,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
... skipping 12 lines ...
May 31 13:09:18.927: INFO: Trying to dial the pod
May 31 13:09:23.972: INFO: Controller my-hostname-basic-c5bc268b-ce44-4b52-8d35-6d07bc3ac941: Got expected result from replica 1 [my-hostname-basic-c5bc268b-ce44-4b52-8d35-6d07bc3ac941-5mvqh]: "my-hostname-basic-c5bc268b-ce44-4b52-8d35-6d07bc3ac941-5mvqh", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:175
May 31 13:09:23.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4666" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":292,"completed":191,"skipped":3028,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
May 31 13:09:29.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8900" for this suite.
STEP: Destroying namespace "webhook-8900-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":292,"completed":192,"skipped":3059,"failed":0}
SSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Events
... skipping 16 lines ...
May 31 13:09:38.046: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  test/e2e/framework/framework.go:175
May 31 13:09:38.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7274" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":292,"completed":193,"skipped":3062,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-downwardapi-hqmz
STEP: Creating a pod to test atomic-volume-subpath
May 31 13:09:38.164: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-hqmz" in namespace "subpath-1204" to be "Succeeded or Failed"
May 31 13:09:38.174: INFO: Pod "pod-subpath-test-downwardapi-hqmz": Phase="Pending", Reason="", readiness=false. Elapsed: 9.2845ms
May 31 13:09:40.180: INFO: Pod "pod-subpath-test-downwardapi-hqmz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015438782s
May 31 13:09:42.186: INFO: Pod "pod-subpath-test-downwardapi-hqmz": Phase="Running", Reason="", readiness=true. Elapsed: 4.021715206s
May 31 13:09:44.192: INFO: Pod "pod-subpath-test-downwardapi-hqmz": Phase="Running", Reason="", readiness=true. Elapsed: 6.027340164s
May 31 13:09:46.202: INFO: Pod "pod-subpath-test-downwardapi-hqmz": Phase="Running", Reason="", readiness=true. Elapsed: 8.037161489s
May 31 13:09:48.219: INFO: Pod "pod-subpath-test-downwardapi-hqmz": Phase="Running", Reason="", readiness=true. Elapsed: 10.054371786s
... skipping 2 lines ...
May 31 13:09:54.244: INFO: Pod "pod-subpath-test-downwardapi-hqmz": Phase="Running", Reason="", readiness=true. Elapsed: 16.079977978s
May 31 13:09:56.251: INFO: Pod "pod-subpath-test-downwardapi-hqmz": Phase="Running", Reason="", readiness=true. Elapsed: 18.086242054s
May 31 13:09:58.264: INFO: Pod "pod-subpath-test-downwardapi-hqmz": Phase="Running", Reason="", readiness=true. Elapsed: 20.099561185s
May 31 13:10:00.272: INFO: Pod "pod-subpath-test-downwardapi-hqmz": Phase="Running", Reason="", readiness=true. Elapsed: 22.107385385s
May 31 13:10:02.283: INFO: Pod "pod-subpath-test-downwardapi-hqmz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.118245102s
STEP: Saw pod success
May 31 13:10:02.283: INFO: Pod "pod-subpath-test-downwardapi-hqmz" satisfied condition "Succeeded or Failed"
May 31 13:10:02.292: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-downwardapi-hqmz container test-container-subpath-downwardapi-hqmz: <nil>
STEP: delete the pod
May 31 13:10:02.367: INFO: Waiting for pod pod-subpath-test-downwardapi-hqmz to disappear
May 31 13:10:02.386: INFO: Pod pod-subpath-test-downwardapi-hqmz no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-hqmz
May 31 13:10:02.386: INFO: Deleting pod "pod-subpath-test-downwardapi-hqmz" in namespace "subpath-1204"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
May 31 13:10:02.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1204" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":292,"completed":194,"skipped":3075,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
May 31 13:10:08.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8553" for this suite.
STEP: Destroying namespace "webhook-8553-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":292,"completed":195,"skipped":3080,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-e908029e-0585-4386-ac64-6901c2bb8aeb
STEP: Creating a pod to test consume secrets
May 31 13:10:08.394: INFO: Waiting up to 5m0s for pod "pod-secrets-edfbab0c-d87e-4507-9e15-c5daf9303a5b" in namespace "secrets-3997" to be "Succeeded or Failed"
May 31 13:10:08.398: INFO: Pod "pod-secrets-edfbab0c-d87e-4507-9e15-c5daf9303a5b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.961427ms
May 31 13:10:10.422: INFO: Pod "pod-secrets-edfbab0c-d87e-4507-9e15-c5daf9303a5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027429811s
May 31 13:10:12.431: INFO: Pod "pod-secrets-edfbab0c-d87e-4507-9e15-c5daf9303a5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036483074s
STEP: Saw pod success
May 31 13:10:12.431: INFO: Pod "pod-secrets-edfbab0c-d87e-4507-9e15-c5daf9303a5b" satisfied condition "Succeeded or Failed"
May 31 13:10:12.436: INFO: Trying to get logs from node kind-worker pod pod-secrets-edfbab0c-d87e-4507-9e15-c5daf9303a5b container secret-volume-test: <nil>
STEP: delete the pod
May 31 13:10:12.456: INFO: Waiting for pod pod-secrets-edfbab0c-d87e-4507-9e15-c5daf9303a5b to disappear
May 31 13:10:12.461: INFO: Pod pod-secrets-edfbab0c-d87e-4507-9e15-c5daf9303a5b no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
May 31 13:10:12.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3997" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":292,"completed":196,"skipped":3089,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-b378236a-d823-47b6-8314-9578d99f445c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
May 31 13:11:29.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8574" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":197,"skipped":3096,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 8 lines ...
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
May 31 13:11:36.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8286" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":292,"completed":198,"skipped":3124,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should run through the lifecycle of a ServiceAccount [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 10 lines ...
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
May 31 13:11:36.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8240" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":292,"completed":199,"skipped":3135,"failed":0}
SSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 22 lines ...
May 31 13:11:41.616: INFO: Pod "test-cleanup-controller-n7l2v" is available:
&Pod{ObjectMeta:{test-cleanup-controller-n7l2v test-cleanup-controller- deployment-4437 /api/v1/namespaces/deployment-4437/pods/test-cleanup-controller-n7l2v df87393c-88a3-440b-9220-e183cfac83b5 23985 0 2020-05-31 13:11:36 +0000 UTC <nil> <nil> map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller e2429ffd-c3ce-4f3d-bce6-40ab15ce3870 0xc005741db7 0xc005741db8}] []  [{kube-controller-manager Update v1 2020-05-31 13:11:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e2429ffd-c3ce-4f3d-bce6-40ab15ce3870\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-31 13:11:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.233\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvgnf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvgnf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvgnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServi
ceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 13:11:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 13:11:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 13:11:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 13:11:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.233,StartTime:2020-05-31 13:11:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-31 13:11:38 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://847d3ecb041e30386634efa8a6f65c5485c8d9070479240c5d88c62a12820a78,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.233,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
May 31 13:11:41.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4437" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":292,"completed":200,"skipped":3140,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
May 31 13:11:41.747: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 13:11:47.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6021" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":292,"completed":201,"skipped":3155,"failed":0}
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 74 lines ...
May 31 13:12:15.335: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1841/pods","resourceVersion":"24252"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
May 31 13:12:15.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1841" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":292,"completed":202,"skipped":3156,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 27 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 13:12:29.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4525" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":292,"completed":203,"skipped":3167,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 13:12:29.800: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 31 13:12:29.846: INFO: Waiting up to 5m0s for pod "pod-0217fb6f-4269-454e-9163-8a66ba5c250a" in namespace "emptydir-4205" to be "Succeeded or Failed"
May 31 13:12:29.850: INFO: Pod "pod-0217fb6f-4269-454e-9163-8a66ba5c250a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167652ms
May 31 13:12:31.870: INFO: Pod "pod-0217fb6f-4269-454e-9163-8a66ba5c250a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023688276s
May 31 13:12:33.875: INFO: Pod "pod-0217fb6f-4269-454e-9163-8a66ba5c250a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028590864s
STEP: Saw pod success
May 31 13:12:33.875: INFO: Pod "pod-0217fb6f-4269-454e-9163-8a66ba5c250a" satisfied condition "Succeeded or Failed"
May 31 13:12:33.879: INFO: Trying to get logs from node kind-worker2 pod pod-0217fb6f-4269-454e-9163-8a66ba5c250a container test-container: <nil>
STEP: delete the pod
May 31 13:12:33.910: INFO: Waiting for pod pod-0217fb6f-4269-454e-9163-8a66ba5c250a to disappear
May 31 13:12:33.914: INFO: Pod pod-0217fb6f-4269-454e-9163-8a66ba5c250a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 13:12:33.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4205" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":204,"skipped":3278,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 13:12:33.978: INFO: Waiting up to 5m0s for pod "downwardapi-volume-00a0fe93-026f-461a-aec3-5957ff678205" in namespace "projected-860" to be "Succeeded or Failed"
May 31 13:12:33.984: INFO: Pod "downwardapi-volume-00a0fe93-026f-461a-aec3-5957ff678205": Phase="Pending", Reason="", readiness=false. Elapsed: 5.222398ms
May 31 13:12:36.000: INFO: Pod "downwardapi-volume-00a0fe93-026f-461a-aec3-5957ff678205": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02156396s
May 31 13:12:38.011: INFO: Pod "downwardapi-volume-00a0fe93-026f-461a-aec3-5957ff678205": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032387313s
STEP: Saw pod success
May 31 13:12:38.011: INFO: Pod "downwardapi-volume-00a0fe93-026f-461a-aec3-5957ff678205" satisfied condition "Succeeded or Failed"
May 31 13:12:38.032: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-00a0fe93-026f-461a-aec3-5957ff678205 container client-container: <nil>
STEP: delete the pod
May 31 13:12:38.088: INFO: Waiting for pod downwardapi-volume-00a0fe93-026f-461a-aec3-5957ff678205 to disappear
May 31 13:12:38.095: INFO: Pod downwardapi-volume-00a0fe93-026f-461a-aec3-5957ff678205 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
May 31 13:12:38.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-860" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":292,"completed":205,"skipped":3279,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 31 13:12:38.139: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-698f9190-eac8-47c0-a60f-9014c5552664
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
May 31 13:12:38.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2303" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":292,"completed":206,"skipped":3350,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
May 31 13:12:42.971: INFO: Successfully updated pod "annotationupdate9d3e885a-d44e-4602-a90f-b87825e6a2c0"
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
May 31 13:12:47.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3254" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":292,"completed":207,"skipped":3369,"failed":0}
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 12 lines ...
STEP: reading a file in the container
May 31 13:12:53.194: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl exec --namespace=svcaccounts-6320 pod-service-account-5d992232-9e0d-4499-a779-27503a036f9f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
May 31 13:12:53.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6320" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":292,"completed":208,"skipped":3372,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] 
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
... skipping 19 lines ...
May 31 13:14:35.263: INFO: Noticed Pod "taint-eviction-b2" gets evicted.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
  test/e2e/framework/framework.go:175
May 31 13:14:35.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-multiple-pods-7102" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":292,"completed":209,"skipped":3432,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 60 lines ...
May 31 13:14:46.298: INFO: stderr: ""
May 31 13:14:46.298: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 13:14:46.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5148" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":292,"completed":210,"skipped":3464,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
May 31 13:15:44.765: INFO: Restart count of pod container-probe-6652/busybox-331e9aa9-318e-47f8-ba67-57927ef34e12 is now 1 (54.220922515s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
May 31 13:15:44.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6652" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":292,"completed":211,"skipped":3472,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-a3eb98f7-a18a-4c4a-a8a0-c96edc038892
STEP: Creating a pod to test consume configMaps
May 31 13:15:44.899: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ab512f45-e58e-43e5-8c5a-1c7fcdbd8cdc" in namespace "projected-5189" to be "Succeeded or Failed"
May 31 13:15:44.906: INFO: Pod "pod-projected-configmaps-ab512f45-e58e-43e5-8c5a-1c7fcdbd8cdc": Phase="Pending", Reason="", readiness=false. Elapsed: 7.409228ms
May 31 13:15:46.915: INFO: Pod "pod-projected-configmaps-ab512f45-e58e-43e5-8c5a-1c7fcdbd8cdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016538441s
May 31 13:15:48.927: INFO: Pod "pod-projected-configmaps-ab512f45-e58e-43e5-8c5a-1c7fcdbd8cdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028078306s
STEP: Saw pod success
May 31 13:15:48.927: INFO: Pod "pod-projected-configmaps-ab512f45-e58e-43e5-8c5a-1c7fcdbd8cdc" satisfied condition "Succeeded or Failed"
May 31 13:15:48.939: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-ab512f45-e58e-43e5-8c5a-1c7fcdbd8cdc container projected-configmap-volume-test: <nil>
STEP: delete the pod
May 31 13:15:49.022: INFO: Waiting for pod pod-projected-configmaps-ab512f45-e58e-43e5-8c5a-1c7fcdbd8cdc to disappear
May 31 13:15:49.028: INFO: Pod pod-projected-configmaps-ab512f45-e58e-43e5-8c5a-1c7fcdbd8cdc no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
May 31 13:15:49.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5189" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":292,"completed":212,"skipped":3488,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
May 31 13:16:00.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6361" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":292,"completed":213,"skipped":3512,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 31 13:16:00.235: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
May 31 13:18:00.326: INFO: Deleting pod "var-expansion-e2e04681-83e3-4cc0-b614-96f388ebf114" in namespace "var-expansion-9757"
May 31 13:18:00.334: INFO: Wait up to 5m0s for pod "var-expansion-e2e04681-83e3-4cc0-b614-96f388ebf114" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
May 31 13:18:06.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9757" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":292,"completed":214,"skipped":3520,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
May 31 13:18:06.783: INFO: stderr: ""
May 31 13:18:06.783: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-beta.0.313+46d08c89ab9f55\", GitCommit:\"46d08c89ab9f55bcaf23f0aa3742c53a72a7418a\", GitTreeState:\"clean\", BuildDate:\"2020-05-31T06:25:53Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-beta.0.313+46d08c89ab9f55\", GitCommit:\"46d08c89ab9f55bcaf23f0aa3742c53a72a7418a\", GitTreeState:\"clean\", BuildDate:\"2020-05-31T06:25:53Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 13:18:06.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5038" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":292,"completed":215,"skipped":3544,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 13:18:06.840: INFO: Waiting up to 5m0s for pod "downwardapi-volume-08ecc747-72fd-4d5f-b230-d009fce8c304" in namespace "downward-api-8821" to be "Succeeded or Failed"
May 31 13:18:06.844: INFO: Pod "downwardapi-volume-08ecc747-72fd-4d5f-b230-d009fce8c304": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060714ms
May 31 13:18:08.849: INFO: Pod "downwardapi-volume-08ecc747-72fd-4d5f-b230-d009fce8c304": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008670855s
May 31 13:18:10.860: INFO: Pod "downwardapi-volume-08ecc747-72fd-4d5f-b230-d009fce8c304": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019867027s
STEP: Saw pod success
May 31 13:18:10.860: INFO: Pod "downwardapi-volume-08ecc747-72fd-4d5f-b230-d009fce8c304" satisfied condition "Succeeded or Failed"
May 31 13:18:10.868: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-08ecc747-72fd-4d5f-b230-d009fce8c304 container client-container: <nil>
STEP: delete the pod
May 31 13:18:10.900: INFO: Waiting for pod downwardapi-volume-08ecc747-72fd-4d5f-b230-d009fce8c304 to disappear
May 31 13:18:10.903: INFO: Pod downwardapi-volume-08ecc747-72fd-4d5f-b230-d009fce8c304 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
May 31 13:18:10.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8821" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":292,"completed":216,"skipped":3567,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
May 31 13:18:15.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8463" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":292,"completed":217,"skipped":3589,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] PreStop
... skipping 26 lines ...
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  test/e2e/framework/framework.go:175
May 31 13:18:28.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-7803" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":292,"completed":218,"skipped":3618,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 46 lines ...
May 31 13:18:51.724: INFO: Pod "test-rollover-deployment-7c4fd9c879-rf7gv" is available:
&Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-rf7gv test-rollover-deployment-7c4fd9c879- deployment-1447 /api/v1/namespaces/deployment-1447/pods/test-rollover-deployment-7c4fd9c879-rf7gv 7774eda0-492b-4aac-b26d-ea76b73e7f6a 26017 0 2020-05-31 13:18:37 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 e1423f7b-126b-4f4c-9dfa-29936b1f94ff 0xc002d47817 0xc002d47818}] []  [{kube-controller-manager Update v1 2020-05-31 13:18:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e1423f7b-126b-4f4c-9dfa-29936b1f94ff\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-31 13:18:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.94\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j9j5d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j9j5d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j9j5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDe
vices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 13:18:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 13:18:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 13:18:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 13:18:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.94,StartTime:2020-05-31 
13:18:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-31 13:18:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://9401e3ec974cc36705955a1ef9ef4994754d5e1db01b59fd71ac40b5af7c327c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.94,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
May 31 13:18:51.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1447" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":292,"completed":219,"skipped":3653,"failed":0}
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-c5a42115-2b9b-4188-bd0b-469368f81798
STEP: Creating a pod to test consume configMaps
May 31 13:18:51.919: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0f00dcfe-e009-4a57-a6f8-218a57ce0db8" in namespace "projected-3180" to be "Succeeded or Failed"
May 31 13:18:51.942: INFO: Pod "pod-projected-configmaps-0f00dcfe-e009-4a57-a6f8-218a57ce0db8": Phase="Pending", Reason="", readiness=false. Elapsed: 22.519308ms
May 31 13:18:53.951: INFO: Pod "pod-projected-configmaps-0f00dcfe-e009-4a57-a6f8-218a57ce0db8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031617206s
May 31 13:18:55.964: INFO: Pod "pod-projected-configmaps-0f00dcfe-e009-4a57-a6f8-218a57ce0db8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044073558s
STEP: Saw pod success
May 31 13:18:55.964: INFO: Pod "pod-projected-configmaps-0f00dcfe-e009-4a57-a6f8-218a57ce0db8" satisfied condition "Succeeded or Failed"
May 31 13:18:55.975: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-0f00dcfe-e009-4a57-a6f8-218a57ce0db8 container projected-configmap-volume-test: <nil>
STEP: delete the pod
May 31 13:18:56.022: INFO: Waiting for pod pod-projected-configmaps-0f00dcfe-e009-4a57-a6f8-218a57ce0db8 to disappear
May 31 13:18:56.031: INFO: Pod pod-projected-configmaps-0f00dcfe-e009-4a57-a6f8-218a57ce0db8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
May 31 13:18:56.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3180" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":220,"skipped":3656,"failed":0}
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 34 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
May 31 13:18:57.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7612" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":292,"completed":221,"skipped":3657,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 9 lines ...
STEP: Updating configmap projected-configmap-test-upd-d78c22d8-a3f7-4caf-8431-f6ffcfb8e7c5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
May 31 13:20:05.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4508" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":222,"skipped":3708,"failed":0}

------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Aggregator
... skipping 21 lines ...
[AfterEach] [sig-api-machinery] Aggregator
  test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  test/e2e/framework/framework.go:175
May 31 13:20:28.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4114" for this suite.
•{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":292,"completed":223,"skipped":3708,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 13 lines ...
May 31 13:20:32.988: INFO: Trying to dial the pod
May 31 13:20:38.020: INFO: Controller my-hostname-basic-956424a9-9e5f-48c0-8ed1-5138c556b720: Got expected result from replica 1 [my-hostname-basic-956424a9-9e5f-48c0-8ed1-5138c556b720-rk9vn]: "my-hostname-basic-956424a9-9e5f-48c0-8ed1-5138c556b720-rk9vn", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
May 31 13:20:38.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3417" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":292,"completed":224,"skipped":3719,"failed":0}
SSS
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] 
  removing taint cancels eviction [Disruptive] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
... skipping 20 lines ...
STEP: Waiting some time to make sure that toleration time passed.
May 31 13:22:53.435: INFO: Pod wasn't evicted. Test successful
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  test/e2e/framework/framework.go:175
May 31 13:22:53.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-single-pod-6669" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":292,"completed":225,"skipped":3722,"failed":0}
S
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
... skipping 11 lines ...
May 31 13:22:58.615: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:175
May 31 13:22:59.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7019" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":292,"completed":226,"skipped":3723,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 13:22:59.687: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
May 31 13:22:59.803: INFO: Waiting up to 5m0s for pod "pod-d0bf6264-1776-4bda-ac45-c7c3d0674304" in namespace "emptydir-7434" to be "Succeeded or Failed"
May 31 13:22:59.809: INFO: Pod "pod-d0bf6264-1776-4bda-ac45-c7c3d0674304": Phase="Pending", Reason="", readiness=false. Elapsed: 5.342466ms
May 31 13:23:01.830: INFO: Pod "pod-d0bf6264-1776-4bda-ac45-c7c3d0674304": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02721759s
May 31 13:23:03.845: INFO: Pod "pod-d0bf6264-1776-4bda-ac45-c7c3d0674304": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041461766s
STEP: Saw pod success
May 31 13:23:03.845: INFO: Pod "pod-d0bf6264-1776-4bda-ac45-c7c3d0674304" satisfied condition "Succeeded or Failed"
May 31 13:23:03.856: INFO: Trying to get logs from node kind-worker pod pod-d0bf6264-1776-4bda-ac45-c7c3d0674304 container test-container: <nil>
STEP: delete the pod
May 31 13:23:03.910: INFO: Waiting for pod pod-d0bf6264-1776-4bda-ac45-c7c3d0674304 to disappear
May 31 13:23:03.914: INFO: Pod pod-d0bf6264-1776-4bda-ac45-c7c3d0674304 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 13:23:03.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7434" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":227,"skipped":3725,"failed":0}
SSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 7 lines ...
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
May 31 13:24:03.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3994" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":292,"completed":228,"skipped":3729,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 48 lines ...
May 31 13:24:12.538: INFO: stderr: ""
May 31 13:24:12.539: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 13:24:12.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8229" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":292,"completed":229,"skipped":3754,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 28 lines ...
  test/e2e/framework/framework.go:175
May 31 13:24:28.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5800" for this suite.
STEP: Destroying namespace "webhook-5800-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":292,"completed":230,"skipped":3773,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
May 31 13:24:34.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8821" for this suite.
STEP: Destroying namespace "webhook-8821-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":292,"completed":231,"skipped":3796,"failed":0}
SSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
May 31 13:24:34.728: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test env composition
May 31 13:24:34.836: INFO: Waiting up to 5m0s for pod "var-expansion-04f5b800-992c-4744-bc6d-05201ab1c38e" in namespace "var-expansion-493" to be "Succeeded or Failed"
May 31 13:24:34.868: INFO: Pod "var-expansion-04f5b800-992c-4744-bc6d-05201ab1c38e": Phase="Pending", Reason="", readiness=false. Elapsed: 31.93517ms
May 31 13:24:36.872: INFO: Pod "var-expansion-04f5b800-992c-4744-bc6d-05201ab1c38e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035924224s
May 31 13:24:38.884: INFO: Pod "var-expansion-04f5b800-992c-4744-bc6d-05201ab1c38e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047566536s
STEP: Saw pod success
May 31 13:24:38.884: INFO: Pod "var-expansion-04f5b800-992c-4744-bc6d-05201ab1c38e" satisfied condition "Succeeded or Failed"
May 31 13:24:38.888: INFO: Trying to get logs from node kind-worker2 pod var-expansion-04f5b800-992c-4744-bc6d-05201ab1c38e container dapi-container: <nil>
STEP: delete the pod
May 31 13:24:38.936: INFO: Waiting for pod var-expansion-04f5b800-992c-4744-bc6d-05201ab1c38e to disappear
May 31 13:24:38.948: INFO: Pod var-expansion-04f5b800-992c-4744-bc6d-05201ab1c38e no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
May 31 13:24:38.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-493" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":292,"completed":232,"skipped":3799,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 36 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
May 31 13:24:40.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4853" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":292,"completed":233,"skipped":3813,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 52 lines ...
May 31 13:24:52.046: INFO: stderr: ""
May 31 13:24:52.046: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 13:24:52.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9388" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":292,"completed":234,"skipped":3828,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 13:24:59.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-725" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":292,"completed":235,"skipped":3834,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  test/e2e/framework/framework.go:175
May 31 13:25:05.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9719" for this suite.
STEP: Destroying namespace "webhook-9719-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":292,"completed":236,"skipped":3847,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 31 13:25:13.575: INFO: File wheezy_udp@dns-test-service-3.dns-4625.svc.cluster.local from pod  dns-4625/dns-test-c3d8bda8-6ebf-4b96-b4b0-63251928a128 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 31 13:25:13.602: INFO: File jessie_udp@dns-test-service-3.dns-4625.svc.cluster.local from pod  dns-4625/dns-test-c3d8bda8-6ebf-4b96-b4b0-63251928a128 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 31 13:25:13.602: INFO: Lookups using dns-4625/dns-test-c3d8bda8-6ebf-4b96-b4b0-63251928a128 failed for: [wheezy_udp@dns-test-service-3.dns-4625.svc.cluster.local jessie_udp@dns-test-service-3.dns-4625.svc.cluster.local]

May 31 13:25:18.615: INFO: File wheezy_udp@dns-test-service-3.dns-4625.svc.cluster.local from pod  dns-4625/dns-test-c3d8bda8-6ebf-4b96-b4b0-63251928a128 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 31 13:25:18.623: INFO: File jessie_udp@dns-test-service-3.dns-4625.svc.cluster.local from pod  dns-4625/dns-test-c3d8bda8-6ebf-4b96-b4b0-63251928a128 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 31 13:25:18.623: INFO: Lookups using dns-4625/dns-test-c3d8bda8-6ebf-4b96-b4b0-63251928a128 failed for: [wheezy_udp@dns-test-service-3.dns-4625.svc.cluster.local jessie_udp@dns-test-service-3.dns-4625.svc.cluster.local]

May 31 13:25:23.611: INFO: File wheezy_udp@dns-test-service-3.dns-4625.svc.cluster.local from pod  dns-4625/dns-test-c3d8bda8-6ebf-4b96-b4b0-63251928a128 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 31 13:25:23.615: INFO: File jessie_udp@dns-test-service-3.dns-4625.svc.cluster.local from pod  dns-4625/dns-test-c3d8bda8-6ebf-4b96-b4b0-63251928a128 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 31 13:25:23.615: INFO: Lookups using dns-4625/dns-test-c3d8bda8-6ebf-4b96-b4b0-63251928a128 failed for: [wheezy_udp@dns-test-service-3.dns-4625.svc.cluster.local jessie_udp@dns-test-service-3.dns-4625.svc.cluster.local]

May 31 13:25:28.608: INFO: File wheezy_udp@dns-test-service-3.dns-4625.svc.cluster.local from pod  dns-4625/dns-test-c3d8bda8-6ebf-4b96-b4b0-63251928a128 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 31 13:25:28.616: INFO: File jessie_udp@dns-test-service-3.dns-4625.svc.cluster.local from pod  dns-4625/dns-test-c3d8bda8-6ebf-4b96-b4b0-63251928a128 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 31 13:25:28.616: INFO: Lookups using dns-4625/dns-test-c3d8bda8-6ebf-4b96-b4b0-63251928a128 failed for: [wheezy_udp@dns-test-service-3.dns-4625.svc.cluster.local jessie_udp@dns-test-service-3.dns-4625.svc.cluster.local]

May 31 13:25:33.612: INFO: File wheezy_udp@dns-test-service-3.dns-4625.svc.cluster.local from pod  dns-4625/dns-test-c3d8bda8-6ebf-4b96-b4b0-63251928a128 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 31 13:25:33.619: INFO: File jessie_udp@dns-test-service-3.dns-4625.svc.cluster.local from pod  dns-4625/dns-test-c3d8bda8-6ebf-4b96-b4b0-63251928a128 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 31 13:25:33.619: INFO: Lookups using dns-4625/dns-test-c3d8bda8-6ebf-4b96-b4b0-63251928a128 failed for: [wheezy_udp@dns-test-service-3.dns-4625.svc.cluster.local jessie_udp@dns-test-service-3.dns-4625.svc.cluster.local]

May 31 13:25:38.607: INFO: File wheezy_udp@dns-test-service-3.dns-4625.svc.cluster.local from pod  dns-4625/dns-test-c3d8bda8-6ebf-4b96-b4b0-63251928a128 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 31 13:25:38.612: INFO: File jessie_udp@dns-test-service-3.dns-4625.svc.cluster.local from pod  dns-4625/dns-test-c3d8bda8-6ebf-4b96-b4b0-63251928a128 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 31 13:25:38.612: INFO: Lookups using dns-4625/dns-test-c3d8bda8-6ebf-4b96-b4b0-63251928a128 failed for: [wheezy_udp@dns-test-service-3.dns-4625.svc.cluster.local jessie_udp@dns-test-service-3.dns-4625.svc.cluster.local]

May 31 13:25:43.614: INFO: DNS probes using dns-test-c3d8bda8-6ebf-4b96-b4b0-63251928a128 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4625.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4625.svc.cluster.local; sleep 1; done
... skipping 9 lines ...
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
May 31 13:25:47.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4625" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":292,"completed":237,"skipped":3862,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
May 31 13:25:47.995: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://127.0.0.1:40063 --kubeconfig=/root/.kube/kind-test-config proxy --unix-socket=/tmp/kubectl-proxy-unix400832908/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 13:25:48.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4137" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":292,"completed":238,"skipped":3891,"failed":0}
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 26 lines ...
May 31 13:26:09.683: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 31 13:26:10.870: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
May 31 13:26:10.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-149" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":239,"skipped":3898,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
May 31 13:26:10.928: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 13:26:11.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1665" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":292,"completed":240,"skipped":3906,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] LimitRange
... skipping 31 lines ...
May 31 13:26:18.671: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  test/e2e/framework/framework.go:175
May 31 13:26:18.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-5765" for this suite.
•{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":292,"completed":241,"skipped":3924,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 28 lines ...
May 31 13:26:28.842: INFO: stderr: ""
May 31 13:26:28.842: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 13:26:28.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-707" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":292,"completed":242,"skipped":3933,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 10 lines ...
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
May 31 13:26:35.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5595" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":292,"completed":243,"skipped":3942,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-e5e1cdeb-2af6-4002-b24b-33c0188af31c
STEP: Creating a pod to test consume configMaps
May 31 13:26:36.075: INFO: Waiting up to 5m0s for pod "pod-configmaps-c0f8c92e-1896-4a25-98e0-26f116efdd1d" in namespace "configmap-3294" to be "Succeeded or Failed"
May 31 13:26:36.085: INFO: Pod "pod-configmaps-c0f8c92e-1896-4a25-98e0-26f116efdd1d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.180929ms
May 31 13:26:38.091: INFO: Pod "pod-configmaps-c0f8c92e-1896-4a25-98e0-26f116efdd1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015587033s
May 31 13:26:40.097: INFO: Pod "pod-configmaps-c0f8c92e-1896-4a25-98e0-26f116efdd1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021059664s
STEP: Saw pod success
May 31 13:26:40.097: INFO: Pod "pod-configmaps-c0f8c92e-1896-4a25-98e0-26f116efdd1d" satisfied condition "Succeeded or Failed"
May 31 13:26:40.100: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-c0f8c92e-1896-4a25-98e0-26f116efdd1d container configmap-volume-test: <nil>
STEP: delete the pod
May 31 13:26:40.139: INFO: Waiting for pod pod-configmaps-c0f8c92e-1896-4a25-98e0-26f116efdd1d to disappear
May 31 13:26:40.144: INFO: Pod pod-configmaps-c0f8c92e-1896-4a25-98e0-26f116efdd1d no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
May 31 13:26:40.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3294" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":244,"skipped":3954,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
May 31 13:26:40.199: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 13:26:41.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9703" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":292,"completed":245,"skipped":4022,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 13:26:41.614: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3c873373-112b-4b56-aa9e-02b361b6a39e" in namespace "projected-8646" to be "Succeeded or Failed"
May 31 13:26:41.622: INFO: Pod "downwardapi-volume-3c873373-112b-4b56-aa9e-02b361b6a39e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.394733ms
May 31 13:26:43.639: INFO: Pod "downwardapi-volume-3c873373-112b-4b56-aa9e-02b361b6a39e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024910615s
May 31 13:26:45.651: INFO: Pod "downwardapi-volume-3c873373-112b-4b56-aa9e-02b361b6a39e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037540389s
STEP: Saw pod success
May 31 13:26:45.654: INFO: Pod "downwardapi-volume-3c873373-112b-4b56-aa9e-02b361b6a39e" satisfied condition "Succeeded or Failed"
May 31 13:26:45.676: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-3c873373-112b-4b56-aa9e-02b361b6a39e container client-container: <nil>
STEP: delete the pod
May 31 13:26:45.747: INFO: Waiting for pod downwardapi-volume-3c873373-112b-4b56-aa9e-02b361b6a39e to disappear
May 31 13:26:45.756: INFO: Pod downwardapi-volume-3c873373-112b-4b56-aa9e-02b361b6a39e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
May 31 13:26:45.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8646" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":246,"skipped":4040,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 13:26:45.891: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d8641cf3-295c-406c-bdfa-15f74875f722" in namespace "downward-api-7617" to be "Succeeded or Failed"
May 31 13:26:45.906: INFO: Pod "downwardapi-volume-d8641cf3-295c-406c-bdfa-15f74875f722": Phase="Pending", Reason="", readiness=false. Elapsed: 14.424396ms
May 31 13:26:47.909: INFO: Pod "downwardapi-volume-d8641cf3-295c-406c-bdfa-15f74875f722": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01763885s
May 31 13:26:49.913: INFO: Pod "downwardapi-volume-d8641cf3-295c-406c-bdfa-15f74875f722": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021374045s
STEP: Saw pod success
May 31 13:26:49.913: INFO: Pod "downwardapi-volume-d8641cf3-295c-406c-bdfa-15f74875f722" satisfied condition "Succeeded or Failed"
May 31 13:26:49.925: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-d8641cf3-295c-406c-bdfa-15f74875f722 container client-container: <nil>
STEP: delete the pod
May 31 13:26:49.955: INFO: Waiting for pod downwardapi-volume-d8641cf3-295c-406c-bdfa-15f74875f722 to disappear
May 31 13:26:49.968: INFO: Pod downwardapi-volume-d8641cf3-295c-406c-bdfa-15f74875f722 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
May 31 13:26:49.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7617" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":292,"completed":247,"skipped":4066,"failed":0}
SSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
May 31 13:26:54.071: INFO: Initial restart count of pod liveness-f8990520-43de-40a3-9fab-2a05b9233ab4 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
May 31 13:30:55.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6021" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":292,"completed":248,"skipped":4071,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
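(Editor's note, not part of the log.) The probe test above asserts that a pod with a passing TCP liveness probe keeps a restart count of 0 for the full observation window. A hedged sketch of the kind of manifest involved (names and image are illustrative, not the test's generated ones; any container listening on 8080 works):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp-example   # hypothetical name
spec:
  containers:
  - name: app
    image: docker.io/library/busybox:1.29   # assumption; must run a server on 8080
    livenessProbe:
      tcpSocket:
        port: 8080              # kubelet opens a TCP connection; success = alive
      initialDelaySeconds: 15
      periodSeconds: 10
```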
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 64 lines ...
May 31 13:31:15.340: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9689/pods","resourceVersion":"29527"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
May 31 13:31:15.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9689" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":292,"completed":249,"skipped":4092,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 31 13:31:15.386: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod
May 31 13:31:15.455: INFO: PodSpec: initContainers in spec.initContainers
May 31 13:32:11.158: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-67bd506b-3a29-41af-9cd5-5c14fb96996e", GenerateName:"", Namespace:"init-container-5560", SelfLink:"/api/v1/namespaces/init-container-5560/pods/pod-init-67bd506b-3a29-41af-9cd5-5c14fb96996e", UID:"913efa57-e9d0-4c70-bac8-da9d274265c1", ResourceVersion:"29744", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726528675, loc:(*time.Location)(0x8006d20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"455811163"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002c9eaa0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002c9eac0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002c9eae0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002c9eb00)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-52zmn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00547e040), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), 
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-52zmn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-52zmn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), 
SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-52zmn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004c36058), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-worker2", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000a95110), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004c360f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004c36110)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004c36118), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc004c3611c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726528675, loc:(*time.Location)(0x8006d20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726528675, loc:(*time.Location)(0x8006d20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726528675, loc:(*time.Location)(0x8006d20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, 
v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726528675, loc:(*time.Location)(0x8006d20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.4", PodIP:"10.244.2.32", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.32"}}, StartTime:(*v1.Time)(0xc002c9eb20), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a951f0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://a4452a8be9ceff1282465caa056f3ee3223d94ab3dda801e89fbdb0614bfbecb", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c9eb60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c9eb40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc004c3619f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
May 31 13:32:11.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5560" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":292,"completed":250,"skipped":4105,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-4ed2c1ff-f998-4b89-a8fb-2e35c7dd2984
STEP: Creating a pod to test consume secrets
May 31 13:32:11.219: INFO: Waiting up to 5m0s for pod "pod-secrets-c4a47288-1b10-454f-a99c-96ed83d1a570" in namespace "secrets-7274" to be "Succeeded or Failed"
May 31 13:32:11.223: INFO: Pod "pod-secrets-c4a47288-1b10-454f-a99c-96ed83d1a570": Phase="Pending", Reason="", readiness=false. Elapsed: 4.509628ms
May 31 13:32:13.231: INFO: Pod "pod-secrets-c4a47288-1b10-454f-a99c-96ed83d1a570": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012580501s
May 31 13:32:15.240: INFO: Pod "pod-secrets-c4a47288-1b10-454f-a99c-96ed83d1a570": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020946041s
STEP: Saw pod success
May 31 13:32:15.240: INFO: Pod "pod-secrets-c4a47288-1b10-454f-a99c-96ed83d1a570" satisfied condition "Succeeded or Failed"
May 31 13:32:15.247: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-c4a47288-1b10-454f-a99c-96ed83d1a570 container secret-volume-test: <nil>
STEP: delete the pod
May 31 13:32:15.305: INFO: Waiting for pod pod-secrets-c4a47288-1b10-454f-a99c-96ed83d1a570 to disappear
May 31 13:32:15.311: INFO: Pod pod-secrets-c4a47288-1b10-454f-a99c-96ed83d1a570 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
May 31 13:32:15.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7274" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":251,"skipped":4122,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 2 lines ...
May 31 13:32:15.336: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 31 13:32:18.465: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
May 31 13:32:18.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4579" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":292,"completed":252,"skipped":4148,"failed":0}
SSSSSSSSSSSSSSSSSSS
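(Editor's note, not part of the log.) The termination-message test above relies on `terminationMessagePolicy: FallbackToLogsOnError`: when a container exits with an error without writing to `/dev/termination-log`, the kubelet takes the termination message from the tail of the container's log instead — which is why the log output `DONE` matches the expected message. An illustrative manifest (container name is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example
spec:
  containers:
  - name: term
    image: docker.io/library/busybox:1.29
    # Writes DONE to the log, then fails; the message is read from the log tail.
    command: ["/bin/sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
```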
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 21 lines ...
May 31 13:32:34.649: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
May 31 13:32:34.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5621" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":292,"completed":253,"skipped":4167,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 13:32:34.756: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76f11da8-d7d9-4f2f-b24a-2a64e7012048" in namespace "projected-4664" to be "Succeeded or Failed"
May 31 13:32:34.761: INFO: Pod "downwardapi-volume-76f11da8-d7d9-4f2f-b24a-2a64e7012048": Phase="Pending", Reason="", readiness=false. Elapsed: 4.359134ms
May 31 13:32:36.770: INFO: Pod "downwardapi-volume-76f11da8-d7d9-4f2f-b24a-2a64e7012048": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013302176s
May 31 13:32:38.784: INFO: Pod "downwardapi-volume-76f11da8-d7d9-4f2f-b24a-2a64e7012048": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027228348s
STEP: Saw pod success
May 31 13:32:38.784: INFO: Pod "downwardapi-volume-76f11da8-d7d9-4f2f-b24a-2a64e7012048" satisfied condition "Succeeded or Failed"
May 31 13:32:38.789: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-76f11da8-d7d9-4f2f-b24a-2a64e7012048 container client-container: <nil>
STEP: delete the pod
May 31 13:32:38.828: INFO: Waiting for pod downwardapi-volume-76f11da8-d7d9-4f2f-b24a-2a64e7012048 to disappear
May 31 13:32:38.833: INFO: Pod downwardapi-volume-76f11da8-d7d9-4f2f-b24a-2a64e7012048 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
May 31 13:32:38.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4664" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":254,"skipped":4197,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-5e409116-897b-40ad-904f-d56987a3ad60
STEP: Creating a pod to test consume secrets
May 31 13:32:38.912: INFO: Waiting up to 5m0s for pod "pod-secrets-8de99391-463a-4971-a078-6ff12281a1bc" in namespace "secrets-9201" to be "Succeeded or Failed"
May 31 13:32:38.916: INFO: Pod "pod-secrets-8de99391-463a-4971-a078-6ff12281a1bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308368ms
May 31 13:32:40.933: INFO: Pod "pod-secrets-8de99391-463a-4971-a078-6ff12281a1bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020567921s
May 31 13:32:42.942: INFO: Pod "pod-secrets-8de99391-463a-4971-a078-6ff12281a1bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030064321s
STEP: Saw pod success
May 31 13:32:42.943: INFO: Pod "pod-secrets-8de99391-463a-4971-a078-6ff12281a1bc" satisfied condition "Succeeded or Failed"
May 31 13:32:42.955: INFO: Trying to get logs from node kind-worker pod pod-secrets-8de99391-463a-4971-a078-6ff12281a1bc container secret-env-test: <nil>
STEP: delete the pod
May 31 13:32:43.000: INFO: Waiting for pod pod-secrets-8de99391-463a-4971-a078-6ff12281a1bc to disappear
May 31 13:32:43.007: INFO: Pod pod-secrets-8de99391-463a-4971-a078-6ff12281a1bc no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
May 31 13:32:43.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9201" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":292,"completed":255,"skipped":4206,"failed":0}
SSSSSSSSSSSSSS
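(Editor's note, not part of the log.) The Secrets-in-env test above creates a secret and injects one of its keys into a container environment via `secretKeyRef`. A sketch of that pattern (all names illustrative, not the test's generated `secret-test-...` names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env"]   # the test greps the env output for the value
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test-example   # hypothetical secret name
          key: data-1                 # hypothetical key
```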
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
May 31 13:32:47.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8301" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":292,"completed":256,"skipped":4220,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 40 lines ...
May 31 13:32:57.807: INFO: Deleting pod "simpletest-rc-to-be-deleted-fqk5w" in namespace "gc-9912"
May 31 13:32:57.910: INFO: Deleting pod "simpletest-rc-to-be-deleted-grnmp" in namespace "gc-9912"
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
May 31 13:32:57.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9912" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":292,"completed":257,"skipped":4275,"failed":0}

------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
May 31 13:33:04.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2954" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":292,"completed":258,"skipped":4275,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 31 13:33:04.278: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
May 31 13:35:04.394: INFO: Deleting pod "var-expansion-1d58c0f8-51ef-4235-9db8-1aaaeeeea34a" in namespace "var-expansion-3533"
May 31 13:35:04.409: INFO: Wait up to 5m0s for pod "var-expansion-1d58c0f8-51ef-4235-9db8-1aaaeeeea34a" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
May 31 13:35:10.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3533" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":292,"completed":259,"skipped":4283,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 29 lines ...
May 31 13:35:14.799: INFO: Unable to read jessie_tcp@dns-test-service.dns-9611.svc from pod dns-9611/dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e: the server could not find the requested resource (get pods dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e)
May 31 13:35:14.807: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9611.svc from pod dns-9611/dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e: the server could not find the requested resource (get pods dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e)
May 31 13:35:14.820: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9611.svc from pod dns-9611/dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e: the server could not find the requested resource (get pods dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e)
May 31 13:35:14.844: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-9611.svc from pod dns-9611/dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e: the server could not find the requested resource (get pods dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e)
May 31 13:35:14.853: INFO: Unable to read jessie_udp@PodARecord from pod dns-9611/dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e: the server could not find the requested resource (get pods dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e)
May 31 13:35:14.859: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9611/dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e: the server could not find the requested resource (get pods dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e)
May 31 13:35:14.873: INFO: Lookups using dns-9611/dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9611 wheezy_tcp@dns-test-service.dns-9611 wheezy_udp@dns-test-service.dns-9611.svc wheezy_tcp@dns-test-service.dns-9611.svc wheezy_udp@_http._tcp.dns-test-service.dns-9611.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9611.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9611 jessie_tcp@dns-test-service.dns-9611 jessie_udp@dns-test-service.dns-9611.svc jessie_tcp@dns-test-service.dns-9611.svc jessie_udp@_http._tcp.dns-test-service.dns-9611.svc jessie_tcp@_http._tcp.dns-test-service.dns-9611.svc jessie_tcp@_http._tcp.test-service-2.dns-9611.svc jessie_udp@PodARecord jessie_tcp@PodARecord]

May 31 13:35:19.880: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9611/dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e: the server could not find the requested resource (get pods dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e)
May 31 13:35:19.885: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9611/dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e: the server could not find the requested resource (get pods dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e)
May 31 13:35:19.892: INFO: Unable to read wheezy_udp@dns-test-service.dns-9611 from pod dns-9611/dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e: the server could not find the requested resource (get pods dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e)
May 31 13:35:19.900: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9611 from pod dns-9611/dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e: the server could not find the requested resource (get pods dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e)
May 31 13:35:19.904: INFO: Unable to read wheezy_udp@dns-test-service.dns-9611.svc from pod dns-9611/dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e: the server could not find the requested resource (get pods dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e)
... skipping 5 lines ...
May 31 13:35:19.976: INFO: Unable to read jessie_udp@dns-test-service.dns-9611 from pod dns-9611/dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e: the server could not find the requested resource (get pods dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e)
May 31 13:35:19.984: INFO: Unable to read jessie_tcp@dns-test-service.dns-9611 from pod dns-9611/dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e: the server could not find the requested resource (get pods dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e)
May 31 13:35:19.994: INFO: Unable to read jessie_udp@dns-test-service.dns-9611.svc from pod dns-9611/dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e: the server could not find the requested resource (get pods dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e)
May 31 13:35:19.998: INFO: Unable to read jessie_tcp@dns-test-service.dns-9611.svc from pod dns-9611/dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e: the server could not find the requested resource (get pods dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e)
May 31 13:35:20.001: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9611.svc from pod dns-9611/dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e: the server could not find the requested resource (get pods dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e)
May 31 13:35:20.005: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9611.svc from pod dns-9611/dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e: the server could not find the requested resource (get pods dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e)
May 31 13:35:20.032: INFO: Lookups using dns-9611/dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9611 wheezy_tcp@dns-test-service.dns-9611 wheezy_udp@dns-test-service.dns-9611.svc wheezy_tcp@dns-test-service.dns-9611.svc wheezy_udp@_http._tcp.dns-test-service.dns-9611.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9611.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9611 jessie_tcp@dns-test-service.dns-9611 jessie_udp@dns-test-service.dns-9611.svc jessie_tcp@dns-test-service.dns-9611.svc jessie_udp@_http._tcp.dns-test-service.dns-9611.svc jessie_tcp@_http._tcp.dns-test-service.dns-9611.svc]

... skipping 55 lines (the same lookup failures repeated in polling rounds at 13:35:24, 13:35:29, 13:35:34, and 13:35:39) ...

May 31 13:35:44.919: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9611.svc from pod dns-9611/dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e: the server could not find the requested resource (get pods dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e)
May 31 13:35:45.016: INFO: Lookups using dns-9611/dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e failed for: [wheezy_tcp@_http._tcp.dns-test-service.dns-9611.svc]

May 31 13:35:50.165: INFO: DNS probes using dns-9611/dns-test-fc9f0224-ee98-4ba7-a6ba-fe9716d9f20e succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
May 31 13:35:50.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9611" for this suite.
•{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":292,"completed":260,"skipped":4309,"failed":0}
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 42 lines ...
• [SLOW TEST:308.437 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":292,"completed":261,"skipped":4312,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 8 lines ...
May 31 13:40:58.926: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"5f44a3a9-b832-4c32-b0e9-678b3c9a60ad", Controller:(*bool)(0xc0003277a6), BlockOwnerDeletion:(*bool)(0xc0003277a7)}}
May 31 13:40:58.935: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"9441fe1d-33ea-4acd-a6a6-6c72df0fbf56", Controller:(*bool)(0xc0003fff3a), BlockOwnerDeletion:(*bool)(0xc0003fff3b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
May 31 13:41:03.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6082" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":292,"completed":262,"skipped":4328,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
May 31 13:41:08.608: INFO: Successfully updated pod "annotationupdate0c984eb3-41dd-42e3-908b-2d5ca0edc42c"
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
May 31 13:41:10.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6023" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":292,"completed":263,"skipped":4336,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 28 lines ...
May 31 13:41:34.974: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 31 13:41:35.210: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
May 31 13:41:35.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5526" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":292,"completed":264,"skipped":4371,"failed":0}
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 16 lines ...
  test/e2e/framework/framework.go:175
May 31 13:42:06.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4045" for this suite.
STEP: Destroying namespace "nsdeletetest-8544" for this suite.
May 31 13:42:06.423: INFO: Namespace nsdeletetest-8544 was already deleted
STEP: Destroying namespace "nsdeletetest-9071" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":292,"completed":265,"skipped":4374,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 10 lines ...
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
May 31 13:42:06.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9702" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":292,"completed":266,"skipped":4381,"failed":0}
SSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 19 lines ...
May 31 13:42:26.552: INFO: The status of Pod test-webserver-76f4ce85-95f9-448f-9cea-2bfda0d9130a is Running (Ready = true)
May 31 13:42:26.556: INFO: Container started at 2020-05-31 13:42:08 +0000 UTC, pod became ready at 2020-05-31 13:42:24 +0000 UTC
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
May 31 13:42:26.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-979" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":292,"completed":267,"skipped":4387,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 13:42:26.584: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on tmpfs
May 31 13:42:26.659: INFO: Waiting up to 5m0s for pod "pod-0e33661b-07e2-45f5-8531-a5b94e1f254b" in namespace "emptydir-6442" to be "Succeeded or Failed"
May 31 13:42:26.664: INFO: Pod "pod-0e33661b-07e2-45f5-8531-a5b94e1f254b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.56096ms
May 31 13:42:28.678: INFO: Pod "pod-0e33661b-07e2-45f5-8531-a5b94e1f254b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019583011s
May 31 13:42:30.694: INFO: Pod "pod-0e33661b-07e2-45f5-8531-a5b94e1f254b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035401074s
STEP: Saw pod success
May 31 13:42:30.694: INFO: Pod "pod-0e33661b-07e2-45f5-8531-a5b94e1f254b" satisfied condition "Succeeded or Failed"
May 31 13:42:30.707: INFO: Trying to get logs from node kind-worker2 pod pod-0e33661b-07e2-45f5-8531-a5b94e1f254b container test-container: <nil>
STEP: delete the pod
May 31 13:42:30.756: INFO: Waiting for pod pod-0e33661b-07e2-45f5-8531-a5b94e1f254b to disappear
May 31 13:42:30.765: INFO: Pod pod-0e33661b-07e2-45f5-8531-a5b94e1f254b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 13:42:30.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6442" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":268,"skipped":4389,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 25 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 13:42:55.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-618" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":292,"completed":269,"skipped":4413,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 33 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
May 31 13:43:01.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1830" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":292,"completed":270,"skipped":4423,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 11 lines ...
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
May 31 13:43:01.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-780" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":292,"completed":271,"skipped":4425,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
May 31 13:43:10.219: INFO: stderr: ""
May 31 13:43:10.219: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6722-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 13:43:14.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1707" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":292,"completed":272,"skipped":4474,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Events 
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Events
... skipping 11 lines ...
STEP: deleting the test event
STEP: listing all events in all namespaces
[AfterEach] [sig-api-machinery] Events
  test/e2e/framework/framework.go:175
May 31 13:43:14.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3358" for this suite.
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":292,"completed":273,"skipped":4488,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-16b7e417-f571-4b89-85b4-1bcede3f0d9f
STEP: Creating a pod to test consume configMaps
May 31 13:43:14.848: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e7c45b64-c8af-44d5-83be-e949054251ea" in namespace "projected-7437" to be "Succeeded or Failed"
May 31 13:43:14.854: INFO: Pod "pod-projected-configmaps-e7c45b64-c8af-44d5-83be-e949054251ea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137631ms
May 31 13:43:16.860: INFO: Pod "pod-projected-configmaps-e7c45b64-c8af-44d5-83be-e949054251ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011665491s
May 31 13:43:18.874: INFO: Pod "pod-projected-configmaps-e7c45b64-c8af-44d5-83be-e949054251ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026365062s
STEP: Saw pod success
May 31 13:43:18.875: INFO: Pod "pod-projected-configmaps-e7c45b64-c8af-44d5-83be-e949054251ea" satisfied condition "Succeeded or Failed"
May 31 13:43:18.884: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-e7c45b64-c8af-44d5-83be-e949054251ea container projected-configmap-volume-test: <nil>
STEP: delete the pod
May 31 13:43:18.943: INFO: Waiting for pod pod-projected-configmaps-e7c45b64-c8af-44d5-83be-e949054251ea to disappear
May 31 13:43:18.955: INFO: Pod pod-projected-configmaps-e7c45b64-c8af-44d5-83be-e949054251ea no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
May 31 13:43:18.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7437" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":292,"completed":274,"skipped":4508,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-4488ac20-0f89-44ee-9e6a-149261c1297e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
May 31 13:44:47.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2365" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":275,"skipped":4524,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-0af8225c-84ce-440a-bb95-aa31a4c7d47f
STEP: Creating a pod to test consume secrets
May 31 13:44:48.053: INFO: Waiting up to 5m0s for pod "pod-secrets-4bc03992-8e4f-4d50-aaee-763035899a49" in namespace "secrets-2369" to be "Succeeded or Failed"
May 31 13:44:48.063: INFO: Pod "pod-secrets-4bc03992-8e4f-4d50-aaee-763035899a49": Phase="Pending", Reason="", readiness=false. Elapsed: 10.766362ms
May 31 13:44:50.079: INFO: Pod "pod-secrets-4bc03992-8e4f-4d50-aaee-763035899a49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026156024s
May 31 13:44:52.088: INFO: Pod "pod-secrets-4bc03992-8e4f-4d50-aaee-763035899a49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03566531s
STEP: Saw pod success
May 31 13:44:52.088: INFO: Pod "pod-secrets-4bc03992-8e4f-4d50-aaee-763035899a49" satisfied condition "Succeeded or Failed"
May 31 13:44:52.107: INFO: Trying to get logs from node kind-worker pod pod-secrets-4bc03992-8e4f-4d50-aaee-763035899a49 container secret-volume-test: <nil>
STEP: delete the pod
May 31 13:44:52.167: INFO: Waiting for pod pod-secrets-4bc03992-8e4f-4d50-aaee-763035899a49 to disappear
May 31 13:44:52.179: INFO: Pod pod-secrets-4bc03992-8e4f-4d50-aaee-763035899a49 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
May 31 13:44:52.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2369" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":276,"skipped":4541,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 19 lines ...
May 31 13:45:04.660: INFO: stderr: ""
May 31 13:45:04.661: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 13:45:04.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3067" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":292,"completed":277,"skipped":4578,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 65 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
May 31 13:45:10.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5410" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":292,"completed":278,"skipped":4593,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 13:45:10.171: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
May 31 13:45:10.226: INFO: Waiting up to 5m0s for pod "pod-b8b8af23-b88a-4455-b8e9-92fb571d6364" in namespace "emptydir-4640" to be "Succeeded or Failed"
May 31 13:45:10.234: INFO: Pod "pod-b8b8af23-b88a-4455-b8e9-92fb571d6364": Phase="Pending", Reason="", readiness=false. Elapsed: 8.298755ms
May 31 13:45:12.240: INFO: Pod "pod-b8b8af23-b88a-4455-b8e9-92fb571d6364": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014194161s
May 31 13:45:14.253: INFO: Pod "pod-b8b8af23-b88a-4455-b8e9-92fb571d6364": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026903721s
STEP: Saw pod success
May 31 13:45:14.253: INFO: Pod "pod-b8b8af23-b88a-4455-b8e9-92fb571d6364" satisfied condition "Succeeded or Failed"
May 31 13:45:14.266: INFO: Trying to get logs from node kind-worker2 pod pod-b8b8af23-b88a-4455-b8e9-92fb571d6364 container test-container: <nil>
STEP: delete the pod
May 31 13:45:14.325: INFO: Waiting for pod pod-b8b8af23-b88a-4455-b8e9-92fb571d6364 to disappear
May 31 13:45:14.340: INFO: Pod pod-b8b8af23-b88a-4455-b8e9-92fb571d6364 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 13:45:14.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4640" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":279,"skipped":4595,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 13:45:14.535: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5a3ba359-328f-48ba-a6be-304e7330e9e4" in namespace "downward-api-5735" to be "Succeeded or Failed"
May 31 13:45:14.548: INFO: Pod "downwardapi-volume-5a3ba359-328f-48ba-a6be-304e7330e9e4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.228057ms
May 31 13:45:16.558: INFO: Pod "downwardapi-volume-5a3ba359-328f-48ba-a6be-304e7330e9e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022595319s
May 31 13:45:18.567: INFO: Pod "downwardapi-volume-5a3ba359-328f-48ba-a6be-304e7330e9e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031152828s
STEP: Saw pod success
May 31 13:45:18.567: INFO: Pod "downwardapi-volume-5a3ba359-328f-48ba-a6be-304e7330e9e4" satisfied condition "Succeeded or Failed"
May 31 13:45:18.575: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-5a3ba359-328f-48ba-a6be-304e7330e9e4 container client-container: <nil>
STEP: delete the pod
May 31 13:45:18.607: INFO: Waiting for pod downwardapi-volume-5a3ba359-328f-48ba-a6be-304e7330e9e4 to disappear
May 31 13:45:18.643: INFO: Pod downwardapi-volume-5a3ba359-328f-48ba-a6be-304e7330e9e4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
May 31 13:45:18.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5735" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":292,"completed":280,"skipped":4622,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 26 lines ...
May 31 13:45:23.156: INFO: Pod "test-recreate-deployment-d5667d9c7-r6f5f" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-r6f5f test-recreate-deployment-d5667d9c7- deployment-578 /api/v1/namespaces/deployment-578/pods/test-recreate-deployment-d5667d9c7-r6f5f 5db97cdf-66d6-4a40-bf03-6b3c3f5d41d9 33520 0 2020-05-31 13:45:23 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 2f226048-72d8-410b-87bd-eb0d861daa70 0xc0032a8ea0 0xc0032a8ea1}] []  [{kube-controller-manager Update v1 2020-05-31 13:45:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2f226048-72d8-410b-87bd-eb0d861daa70\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-31 13:45:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zm6rm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zm6rm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zm6rm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 13:45:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 13:45:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 13:45:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 13:45:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-05-31 13:45:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
May 31 13:45:23.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-578" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":292,"completed":281,"skipped":4635,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 27 lines ...
  test/e2e/framework/framework.go:175
May 31 13:45:31.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8848" for this suite.
STEP: Destroying namespace "webhook-8848-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":292,"completed":282,"skipped":4657,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-bc6c4a20-bfa7-411c-90ac-9fcdfe110042
STEP: Creating a pod to test consume configMaps
May 31 13:45:31.573: INFO: Waiting up to 5m0s for pod "pod-configmaps-0025b469-a8c1-4d99-a81e-7669caaabede" in namespace "configmap-2641" to be "Succeeded or Failed"
May 31 13:45:31.578: INFO: Pod "pod-configmaps-0025b469-a8c1-4d99-a81e-7669caaabede": Phase="Pending", Reason="", readiness=false. Elapsed: 5.421625ms
May 31 13:45:33.591: INFO: Pod "pod-configmaps-0025b469-a8c1-4d99-a81e-7669caaabede": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018292563s
May 31 13:45:35.598: INFO: Pod "pod-configmaps-0025b469-a8c1-4d99-a81e-7669caaabede": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025680163s
STEP: Saw pod success
May 31 13:45:35.598: INFO: Pod "pod-configmaps-0025b469-a8c1-4d99-a81e-7669caaabede" satisfied condition "Succeeded or Failed"
May 31 13:45:35.602: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-0025b469-a8c1-4d99-a81e-7669caaabede container configmap-volume-test: <nil>
STEP: delete the pod
May 31 13:45:35.630: INFO: Waiting for pod pod-configmaps-0025b469-a8c1-4d99-a81e-7669caaabede to disappear
May 31 13:45:35.635: INFO: Pod pod-configmaps-0025b469-a8c1-4d99-a81e-7669caaabede no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
May 31 13:45:35.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2641" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":283,"skipped":4680,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl diff 
  should check if kubectl diff finds a difference for Deployments [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 17 lines ...
May 31 13:45:39.082: INFO: stderr: ""
May 31 13:45:39.082: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 13:45:39.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6021" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":292,"completed":284,"skipped":4684,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 31 13:45:39.115: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 31 13:45:39.248: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 31 13:45:39.264: INFO: Number of nodes with available pods: 0
May 31 13:45:39.264: INFO: Node kind-worker is running more than one daemon pod
... skipping 6 lines ...
May 31 13:45:42.275: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 31 13:45:42.282: INFO: Number of nodes with available pods: 1
May 31 13:45:42.282: INFO: Node kind-worker is running more than one daemon pod
May 31 13:45:43.271: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 31 13:45:43.278: INFO: Number of nodes with available pods: 2
May 31 13:45:43.278: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May 31 13:45:43.300: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 31 13:45:43.309: INFO: Number of nodes with available pods: 2
May 31 13:45:43.309: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1310, will wait for the garbage collector to delete the pods
May 31 13:45:44.408: INFO: Deleting DaemonSet.extensions daemon-set took: 14.965575ms
May 31 13:45:44.519: INFO: Terminating DaemonSet.extensions daemon-set pods took: 111.045752ms
{"component":"entrypoint","file":"prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","time":"2020-05-31T13:54:21Z"}
{"component":"entrypoint","file":"prow/entrypoint/run.go:245","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","time":"2020-05-31T13:54:36Z"}