Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-06-01 12:30
Elapsed: 2h0m
Revision: master
resultstore: https://source.cloud.google.com/results/invocations/09153d05-bfc8-440b-9f60-d53b02879dee/targets/test

No Test Failures!


Error lines from build-log.txt

... skipping 70 lines ...
Analyzing: 4 targets (20 packages loaded, 27 targets configured)
Analyzing: 4 targets (321 packages loaded, 5632 targets configured)
Analyzing: 4 targets (1382 packages loaded, 11282 targets configured)
Analyzing: 4 targets (2268 packages loaded, 15447 targets configured)
Analyzing: 4 targets (2268 packages loaded, 15447 targets configured)
Analyzing: 4 targets (2269 packages loaded, 15447 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/bazel_gazelle/internal/go_repository.bzl:184:13: org_golang_x_tools: gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go:2:43: expected 'package', found 'EOF'
gazelle: found packages escapeinfo (escapeinfo.go) and issue31540 (issue31540.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gccgoimporter/testdata
gazelle: found packages a (a.go) and b (b.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gcimporter/testdata
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go:1:34: expected 'package', found 'EOF'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go:1:16: expected ';', found '.'
gazelle: finding module path for import domain.name/importdecl: exit status 1: can't load package: package domain.name/importdecl: cannot find module providing package domain.name/importdecl
gazelle: finding module path for import old.com/one: exit status 1: can't load package: package old.com/one: cannot find module providing package old.com/one
gazelle: finding module path for import titanic.biz/bar: exit status 1: can't load package: package titanic.biz/bar: cannot find module providing package titanic.biz/bar
gazelle: finding module path for import titanic.biz/foo: exit status 1: can't load package: package titanic.biz/foo: cannot find module providing package titanic.biz/foo
gazelle: finding module path for import fruit.io/pear: exit status 1: can't load package: package fruit.io/pear: cannot find module providing package fruit.io/pear
gazelle: finding module path for import fruit.io/banana: exit status 1: can't load package: package fruit.io/banana: cannot find module providing package fruit.io/banana
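
These gazelle errors come from BUILD file generation for the external org_golang_x_tools repository; the testdata files are deliberately malformed Go sources, so the messages are noise rather than failures. In a first-party tree the same noise can be silenced with a gazelle directive; a minimal sketch, with an illustrative path:

# Sketch only: tell gazelle to skip a directory of intentionally-broken fixtures.
cat >> BUILD.bazel <<'EOF'
# gazelle:exclude cmd/fiximports/testdata
EOF
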
... skipping 157 lines ...
WARNING: Waiting for server process to terminate (waited 10 seconds, waiting at most 60)
WARNING: Waiting for server process to terminate (waited 30 seconds, waiting at most 60)
INFO: Waited 60 seconds for server process (pid=5712) to terminate.
WARNING: Waiting for server process to terminate (waited 5 seconds, waiting at most 10)
WARNING: Waiting for server process to terminate (waited 10 seconds, waiting at most 10)
INFO: Waited 10 seconds for server process (pid=5712) to terminate.
FATAL: Attempted to kill stale server process (pid=5712) using SIGKILL, but it did not die in a timely fashion.
+ true
+ pkill ^bazel
+ true
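
The FATAL above is non-fatal to the job: the `+ true` traces show each teardown step running under `|| true`, so a stale Bazel server that survives SIGKILL cannot abort the run. The pattern, sketched (command names match the log; the wrapper lines are illustrative):

# Best-effort Bazel teardown; ignore failures so cleanup never fails the job.
bazel shutdown || true
pkill ^bazel || true
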
+ mkdir -p _output/bin/
+ cp bazel-bin/test/e2e/e2e.test _output/bin/
+ find /home/prow/go/src/k8s.io/kubernetes/bazel-bin/ -name kubectl -type f
... skipping 46 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.4
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.4
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.18.0.4
... skipping 4 lines ...
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 32 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.3
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.3
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: kind-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.3
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 32 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.2
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.2
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: kind-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.2
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
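
The kubeadm fragments above (repeated once per node, with only advertiseAddress/node-ip changing) are rendered by kind; note fail-swap-on: "false" and the zeroed evictionHard thresholds, which keep the kubelet functional inside a container. Equivalent overrides can be supplied through kubeadmConfigPatches in a kind cluster config; a hedged sketch, with illustrative file name and values:

cat > kind-config.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
kubeadmConfigPatches:
- |
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  evictionHard:
    imagefs.available: "0%"
    nodefs.available: "0%"
EOF
kind create cluster --config kind-config.yaml
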
... skipping 37 lines ...
I0601 12:40:22.595583     228 checks.go:376] validating the presence of executable ebtables
I0601 12:40:22.595652     228 checks.go:376] validating the presence of executable ethtool
I0601 12:40:22.595684     228 checks.go:376] validating the presence of executable socat
I0601 12:40:22.595734     228 checks.go:376] validating the presence of executable tc
I0601 12:40:22.595764     228 checks.go:376] validating the presence of executable touch
I0601 12:40:22.595806     228 checks.go:520] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_PIDS: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0601 12:40:22.629838     228 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0601 12:40:22.699762     228 checks.go:618] validating kubelet version
I0601 12:40:22.951678     228 checks.go:128] validating if the "kubelet" service is enabled and active
I0601 12:40:22.983673     228 checks.go:201] validating availability of port 10250
I0601 12:40:22.983803     228 checks.go:201] validating availability of port 2379
I0601 12:40:22.983843     228 checks.go:201] validating availability of port 2380
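
The SystemVerification warning repeats on every node: the GKE host kernel (4.15.0-1044-gke) ships without the `configs` module, so kubeadm cannot read the kernel configuration, and preflight continues anyway. What kubeadm is probing for can be checked by hand on the host; an illustrative sketch:

# kubeadm's system validator looks for the kernel config here:
modprobe configs 2>/dev/null || echo "configs module unavailable"
ls -l /proc/config.gz "/boot/config-$(uname -r)" 2>/dev/null
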
... skipping 90 lines ...
I0601 12:40:39.731940     228 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 29 milliseconds
I0601 12:40:40.234663     228 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 31 milliseconds
I0601 12:40:40.720476     228 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 17 milliseconds
I0601 12:40:41.239975     228 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 35 milliseconds
I0601 12:40:41.737217     228 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 33 milliseconds
I0601 12:40:42.234416     228 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 32 milliseconds
I0601 12:40:52.197649     228 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 9495 milliseconds
I0601 12:40:52.248321     228 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 45 milliseconds
I0601 12:40:52.716601     228 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 14 milliseconds
I0601 12:40:53.204499     228 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 2 milliseconds
I0601 12:40:53.704776     228 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 2 milliseconds
I0601 12:40:54.204708     228 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 200 OK in 2 milliseconds
I0601 12:40:54.204851     228 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[apiclient] All control plane components are healthy after 24.040029 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0601 12:40:54.211954     228 round_trippers.go:443] POST https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 5 milliseconds
I0601 12:40:54.224675     228 round_trippers.go:443] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 11 milliseconds
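
The round_trippers lines trace kubeadm polling the apiserver: /healthz answers 500 while control plane components settle, then flips to 200, at which point init proceeds (about 24 seconds in total here). The same readiness gate can be reproduced by hand; a minimal sketch, endpoint taken from the log:

# Wait until the apiserver health endpoint returns HTTP 200 (-k: self-signed CA).
until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://kind-control-plane:6443/healthz)" = "200" ]; do
  sleep 1
done
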
... skipping 108 lines ...
I0601 12:41:10.148305     587 checks.go:376] validating the presence of executable ebtables
I0601 12:41:10.148366     587 checks.go:376] validating the presence of executable ethtool
I0601 12:41:10.148506     587 checks.go:376] validating the presence of executable socat
I0601 12:41:10.148589     587 checks.go:376] validating the presence of executable tc
I0601 12:41:10.148635     587 checks.go:376] validating the presence of executable touch
I0601 12:41:10.148791     587 checks.go:520] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0601 12:41:10.172916     587 checks.go:406] checking whether the given node name is reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
... skipping 80 lines ...
I0601 12:41:10.000649     591 checks.go:376] validating the presence of executable ebtables
I0601 12:41:10.000685     591 checks.go:376] validating the presence of executable ethtool
I0601 12:41:10.000709     591 checks.go:376] validating the presence of executable socat
I0601 12:41:10.000746     591 checks.go:376] validating the presence of executable tc
I0601 12:41:10.000768     591 checks.go:376] validating the presence of executable touch
I0601 12:41:10.000806     591 checks.go:520] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_PIDS: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0601 12:41:10.032498     591 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0601 12:41:10.069752     591 checks.go:618] validating kubelet version
I0601 12:41:10.403805     591 checks.go:128] validating if the "kubelet" service is enabled and active
I0601 12:41:10.450090     591 checks.go:201] validating availability of port 10250
I0601 12:41:10.450347     591 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
I0601 12:41:10.450384     591 checks.go:432] validating if the connectivity type is via proxy or direct
... skipping 71 lines ...
+ GINKGO_PID=11498
+ wait 11498
+ ./hack/ginkgo-e2e.sh --provider=skeleton --num-nodes=2 --ginkgo.focus=\[Conformance\] --ginkgo.skip= --report-dir=/logs/artifacts --disable-log-dump=true
Conformance test: not doing test setup.
I0601 12:41:50.994077   11738 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0601 12:41:50.994252   11738 e2e.go:129] Starting e2e run "86aa09c5-74d5-4244-a7a0-9e8f8584a5c9" on Ginkgo node 1
{"msg":"Test Suite starting","total":292,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1591015308 - Will randomize all specs
Will run 292 of 5101 specs
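
The suite header shows how the invocation above narrows the run: --ginkgo.focus=\[Conformance\] selects 292 of 5101 specs, randomized by seed so ordering cannot be relied on. Roughly the same run can be launched against the Bazel-built binary directly; a hedged sketch (flags exist on e2e.test, values taken from the log):

_output/bin/e2e.test \
  --provider=skeleton \
  --kubeconfig=/root/.kube/kind-test-config \
  --ginkgo.focus='\[Conformance\]' \
  --report-dir=/logs/artifacts
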

Jun  1 12:41:51.022: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 12:41:51.213: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2a42572d-5143-489e-9f18-86816361c3b6" in namespace "downward-api-1408" to be "Succeeded or Failed"
Jun  1 12:41:51.217: INFO: Pod "downwardapi-volume-2a42572d-5143-489e-9f18-86816361c3b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.706316ms
Jun  1 12:41:53.224: INFO: Pod "downwardapi-volume-2a42572d-5143-489e-9f18-86816361c3b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0116307s
Jun  1 12:41:55.232: INFO: Pod "downwardapi-volume-2a42572d-5143-489e-9f18-86816361c3b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019662625s
Jun  1 12:41:57.240: INFO: Pod "downwardapi-volume-2a42572d-5143-489e-9f18-86816361c3b6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027074964s
Jun  1 12:41:59.248: INFO: Pod "downwardapi-volume-2a42572d-5143-489e-9f18-86816361c3b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.034800004s
STEP: Saw pod success
Jun  1 12:41:59.249: INFO: Pod "downwardapi-volume-2a42572d-5143-489e-9f18-86816361c3b6" satisfied condition "Succeeded or Failed"
Jun  1 12:41:59.258: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-2a42572d-5143-489e-9f18-86816361c3b6 container client-container: <nil>
STEP: delete the pod
Jun  1 12:41:59.326: INFO: Waiting for pod downwardapi-volume-2a42572d-5143-489e-9f18-86816361c3b6 to disappear
Jun  1 12:41:59.332: INFO: Pod downwardapi-volume-2a42572d-5143-489e-9f18-86816361c3b6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 12:41:59.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1408" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":292,"completed":1,"skipped":28,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-a7dc1c3f-f439-4168-b042-759e878d11b9
STEP: Creating a pod to test consume secrets
Jun  1 12:41:59.426: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7dd9b8a3-dd1a-44c9-b726-86136c55b922" in namespace "projected-1265" to be "Succeeded or Failed"
Jun  1 12:41:59.434: INFO: Pod "pod-projected-secrets-7dd9b8a3-dd1a-44c9-b726-86136c55b922": Phase="Pending", Reason="", readiness=false. Elapsed: 8.342438ms
Jun  1 12:42:01.439: INFO: Pod "pod-projected-secrets-7dd9b8a3-dd1a-44c9-b726-86136c55b922": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013626953s
Jun  1 12:42:03.447: INFO: Pod "pod-projected-secrets-7dd9b8a3-dd1a-44c9-b726-86136c55b922": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021577385s
Jun  1 12:42:05.453: INFO: Pod "pod-projected-secrets-7dd9b8a3-dd1a-44c9-b726-86136c55b922": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027052256s
Jun  1 12:42:07.473: INFO: Pod "pod-projected-secrets-7dd9b8a3-dd1a-44c9-b726-86136c55b922": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047547895s
STEP: Saw pod success
Jun  1 12:42:07.473: INFO: Pod "pod-projected-secrets-7dd9b8a3-dd1a-44c9-b726-86136c55b922" satisfied condition "Succeeded or Failed"
Jun  1 12:42:07.480: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-7dd9b8a3-dd1a-44c9-b726-86136c55b922 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun  1 12:42:07.525: INFO: Waiting for pod pod-projected-secrets-7dd9b8a3-dd1a-44c9-b726-86136c55b922 to disappear
Jun  1 12:42:07.529: INFO: Pod pod-projected-secrets-7dd9b8a3-dd1a-44c9-b726-86136c55b922 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Jun  1 12:42:07.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1265" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":2,"skipped":29,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 66 lines ...
Jun  1 12:42:33.156: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2423/pods","resourceVersion":"910"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Jun  1 12:42:33.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2423" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":292,"completed":3,"skipped":50,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 26 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Jun  1 12:42:34.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1449" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":292,"completed":4,"skipped":73,"failed":0}
SSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
Jun  1 12:42:50.524: INFO: Unable to read jessie_udp@dns-test-service.dns-5083 from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:42:50.528: INFO: Unable to read jessie_tcp@dns-test-service.dns-5083 from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:42:50.534: INFO: Unable to read jessie_udp@dns-test-service.dns-5083.svc from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:42:50.538: INFO: Unable to read jessie_tcp@dns-test-service.dns-5083.svc from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:42:50.541: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5083.svc from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:42:50.545: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5083.svc from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:42:50.569: INFO: Lookups using dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5083 wheezy_tcp@dns-test-service.dns-5083 wheezy_udp@dns-test-service.dns-5083.svc wheezy_tcp@dns-test-service.dns-5083.svc wheezy_udp@_http._tcp.dns-test-service.dns-5083.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5083.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5083 jessie_tcp@dns-test-service.dns-5083 jessie_udp@dns-test-service.dns-5083.svc jessie_tcp@dns-test-service.dns-5083.svc jessie_udp@_http._tcp.dns-test-service.dns-5083.svc jessie_tcp@_http._tcp.dns-test-service.dns-5083.svc]

Jun  1 12:42:55.580: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:42:55.588: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:42:55.593: INFO: Unable to read wheezy_udp@dns-test-service.dns-5083 from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:42:55.597: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5083 from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:42:55.603: INFO: Unable to read wheezy_udp@dns-test-service.dns-5083.svc from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
... skipping 5 lines ...
Jun  1 12:42:55.668: INFO: Unable to read jessie_udp@dns-test-service.dns-5083 from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:42:55.672: INFO: Unable to read jessie_tcp@dns-test-service.dns-5083 from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:42:55.677: INFO: Unable to read jessie_udp@dns-test-service.dns-5083.svc from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:42:55.685: INFO: Unable to read jessie_tcp@dns-test-service.dns-5083.svc from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:42:55.701: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5083.svc from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:42:55.705: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5083.svc from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:42:55.771: INFO: Lookups using dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5083 wheezy_tcp@dns-test-service.dns-5083 wheezy_udp@dns-test-service.dns-5083.svc wheezy_tcp@dns-test-service.dns-5083.svc wheezy_udp@_http._tcp.dns-test-service.dns-5083.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5083.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5083 jessie_tcp@dns-test-service.dns-5083 jessie_udp@dns-test-service.dns-5083.svc jessie_tcp@dns-test-service.dns-5083.svc jessie_udp@_http._tcp.dns-test-service.dns-5083.svc jessie_tcp@_http._tcp.dns-test-service.dns-5083.svc]

Jun  1 12:43:00.579: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:43:00.589: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:43:00.594: INFO: Unable to read wheezy_udp@dns-test-service.dns-5083 from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:43:00.600: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5083 from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:43:00.616: INFO: Unable to read wheezy_udp@dns-test-service.dns-5083.svc from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
... skipping 5 lines ...
Jun  1 12:43:00.693: INFO: Unable to read jessie_udp@dns-test-service.dns-5083 from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:43:00.698: INFO: Unable to read jessie_tcp@dns-test-service.dns-5083 from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:43:00.704: INFO: Unable to read jessie_udp@dns-test-service.dns-5083.svc from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:43:00.708: INFO: Unable to read jessie_tcp@dns-test-service.dns-5083.svc from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:43:00.712: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5083.svc from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:43:00.717: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5083.svc from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:43:00.740: INFO: Lookups using dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5083 wheezy_tcp@dns-test-service.dns-5083 wheezy_udp@dns-test-service.dns-5083.svc wheezy_tcp@dns-test-service.dns-5083.svc wheezy_udp@_http._tcp.dns-test-service.dns-5083.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5083.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5083 jessie_tcp@dns-test-service.dns-5083 jessie_udp@dns-test-service.dns-5083.svc jessie_tcp@dns-test-service.dns-5083.svc jessie_udp@_http._tcp.dns-test-service.dns-5083.svc jessie_tcp@_http._tcp.dns-test-service.dns-5083.svc]

Jun  1 12:43:05.577: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:43:05.588: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:43:05.593: INFO: Unable to read wheezy_udp@dns-test-service.dns-5083 from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:43:05.597: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5083 from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:43:05.604: INFO: Unable to read wheezy_udp@dns-test-service.dns-5083.svc from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
... skipping 5 lines ...
Jun  1 12:43:05.671: INFO: Unable to read jessie_udp@dns-test-service.dns-5083 from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:43:05.679: INFO: Unable to read jessie_tcp@dns-test-service.dns-5083 from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:43:05.684: INFO: Unable to read jessie_udp@dns-test-service.dns-5083.svc from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:43:05.688: INFO: Unable to read jessie_tcp@dns-test-service.dns-5083.svc from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:43:05.696: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5083.svc from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:43:05.715: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5083.svc from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:43:05.752: INFO: Lookups using dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5083 wheezy_tcp@dns-test-service.dns-5083 wheezy_udp@dns-test-service.dns-5083.svc wheezy_tcp@dns-test-service.dns-5083.svc wheezy_udp@_http._tcp.dns-test-service.dns-5083.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5083.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5083 jessie_tcp@dns-test-service.dns-5083 jessie_udp@dns-test-service.dns-5083.svc jessie_tcp@dns-test-service.dns-5083.svc jessie_udp@_http._tcp.dns-test-service.dns-5083.svc jessie_tcp@_http._tcp.dns-test-service.dns-5083.svc]

Jun  1 12:43:10.726: INFO: Unable to read jessie_udp@dns-test-service.dns-5083.svc from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:43:10.741: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5083.svc from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:43:10.749: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5083.svc from pod dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b: the server could not find the requested resource (get pods dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b)
Jun  1 12:43:10.787: INFO: Lookups using dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b failed for: [jessie_udp@dns-test-service.dns-5083.svc jessie_udp@_http._tcp.dns-test-service.dns-5083.svc jessie_tcp@_http._tcp.dns-test-service.dns-5083.svc]

Jun  1 12:43:15.710: INFO: DNS probes using dns-5083/dns-test-6e8bb70d-c9b4-41e3-86d4-13e2198b7d9b succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 12:43:15.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5083" for this suite.
•{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":292,"completed":5,"skipped":77,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 20 lines ...
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Jun  1 12:43:42.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5993" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":292,"completed":6,"skipped":105,"failed":0}
SSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 28 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 12:43:52.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-584" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":292,"completed":7,"skipped":110,"failed":0}
SSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 24 lines ...
Jun  1 12:43:53.671: INFO: created pod pod-service-account-nomountsa-nomountspec
Jun  1 12:43:53.671: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
Jun  1 12:43:53.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8505" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":292,"completed":8,"skipped":117,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-c86a3dc9-c399-4d55-88f7-ab009435ebae
STEP: Creating a pod to test consume secrets
Jun  1 12:43:53.756: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-29b63f77-1458-4bcf-b23d-ee5091efefd7" in namespace "projected-1888" to be "Succeeded or Failed"
Jun  1 12:43:53.759: INFO: Pod "pod-projected-secrets-29b63f77-1458-4bcf-b23d-ee5091efefd7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.507769ms
Jun  1 12:43:55.764: INFO: Pod "pod-projected-secrets-29b63f77-1458-4bcf-b23d-ee5091efefd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007884888s
Jun  1 12:43:57.772: INFO: Pod "pod-projected-secrets-29b63f77-1458-4bcf-b23d-ee5091efefd7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01604347s
Jun  1 12:43:59.778: INFO: Pod "pod-projected-secrets-29b63f77-1458-4bcf-b23d-ee5091efefd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022045262s
STEP: Saw pod success
Jun  1 12:43:59.778: INFO: Pod "pod-projected-secrets-29b63f77-1458-4bcf-b23d-ee5091efefd7" satisfied condition "Succeeded or Failed"
Jun  1 12:43:59.792: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-29b63f77-1458-4bcf-b23d-ee5091efefd7 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun  1 12:43:59.852: INFO: Waiting for pod pod-projected-secrets-29b63f77-1458-4bcf-b23d-ee5091efefd7 to disappear
Jun  1 12:43:59.857: INFO: Pod pod-projected-secrets-29b63f77-1458-4bcf-b23d-ee5091efefd7 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Jun  1 12:43:59.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1888" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":9,"skipped":168,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 12 lines ...
Jun  1 12:44:04.033: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 12:44:04.248: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 12:44:04.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9604" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":292,"completed":10,"skipped":192,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-projected-2k82
STEP: Creating a pod to test atomic-volume-subpath
Jun  1 12:44:04.320: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-2k82" in namespace "subpath-4281" to be "Succeeded or Failed"
Jun  1 12:44:04.324: INFO: Pod "pod-subpath-test-projected-2k82": Phase="Pending", Reason="", readiness=false. Elapsed: 4.259006ms
Jun  1 12:44:06.360: INFO: Pod "pod-subpath-test-projected-2k82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039734372s
Jun  1 12:44:08.397: INFO: Pod "pod-subpath-test-projected-2k82": Phase="Running", Reason="", readiness=true. Elapsed: 4.076954154s
Jun  1 12:44:10.417: INFO: Pod "pod-subpath-test-projected-2k82": Phase="Running", Reason="", readiness=true. Elapsed: 6.097390967s
Jun  1 12:44:12.435: INFO: Pod "pod-subpath-test-projected-2k82": Phase="Running", Reason="", readiness=true. Elapsed: 8.115305292s
Jun  1 12:44:14.441: INFO: Pod "pod-subpath-test-projected-2k82": Phase="Running", Reason="", readiness=true. Elapsed: 10.121164445s
... skipping 2 lines ...
Jun  1 12:44:20.462: INFO: Pod "pod-subpath-test-projected-2k82": Phase="Running", Reason="", readiness=true. Elapsed: 16.14159494s
Jun  1 12:44:22.472: INFO: Pod "pod-subpath-test-projected-2k82": Phase="Running", Reason="", readiness=true. Elapsed: 18.151758451s
Jun  1 12:44:24.490: INFO: Pod "pod-subpath-test-projected-2k82": Phase="Running", Reason="", readiness=true. Elapsed: 20.169885959s
Jun  1 12:44:26.495: INFO: Pod "pod-subpath-test-projected-2k82": Phase="Running", Reason="", readiness=true. Elapsed: 22.174835717s
Jun  1 12:44:28.501: INFO: Pod "pod-subpath-test-projected-2k82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.180976643s
STEP: Saw pod success
Jun  1 12:44:28.501: INFO: Pod "pod-subpath-test-projected-2k82" satisfied condition "Succeeded or Failed"
Jun  1 12:44:28.506: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-projected-2k82 container test-container-subpath-projected-2k82: <nil>
STEP: delete the pod
Jun  1 12:44:28.533: INFO: Waiting for pod pod-subpath-test-projected-2k82 to disappear
Jun  1 12:44:28.537: INFO: Pod pod-subpath-test-projected-2k82 no longer exists
STEP: Deleting pod pod-subpath-test-projected-2k82
Jun  1 12:44:28.537: INFO: Deleting pod "pod-subpath-test-projected-2k82" in namespace "subpath-4281"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Jun  1 12:44:28.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4281" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":292,"completed":11,"skipped":206,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 21 lines ...
Jun  1 12:44:44.672: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Jun  1 12:44:44.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7844" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":292,"completed":12,"skipped":213,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-2417/configmap-test-40fb078e-9397-4b78-9a4f-c61117c6316a
STEP: Creating a pod to test consume configMaps
Jun  1 12:44:44.791: INFO: Waiting up to 5m0s for pod "pod-configmaps-637f910a-6f18-4a1e-929e-30e4c1fdf21d" in namespace "configmap-2417" to be "Succeeded or Failed"
Jun  1 12:44:44.797: INFO: Pod "pod-configmaps-637f910a-6f18-4a1e-929e-30e4c1fdf21d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.559341ms
Jun  1 12:44:46.801: INFO: Pod "pod-configmaps-637f910a-6f18-4a1e-929e-30e4c1fdf21d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009495154s
Jun  1 12:44:48.811: INFO: Pod "pod-configmaps-637f910a-6f18-4a1e-929e-30e4c1fdf21d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019906567s
STEP: Saw pod success
Jun  1 12:44:48.811: INFO: Pod "pod-configmaps-637f910a-6f18-4a1e-929e-30e4c1fdf21d" satisfied condition "Succeeded or Failed"
Jun  1 12:44:48.821: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-637f910a-6f18-4a1e-929e-30e4c1fdf21d container env-test: <nil>
STEP: delete the pod
Jun  1 12:44:48.854: INFO: Waiting for pod pod-configmaps-637f910a-6f18-4a1e-929e-30e4c1fdf21d to disappear
Jun  1 12:44:48.857: INFO: Pod pod-configmaps-637f910a-6f18-4a1e-929e-30e4c1fdf21d no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 12:44:48.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2417" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":292,"completed":13,"skipped":233,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Jun  1 12:44:48.868: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jun  1 12:44:48.909: INFO: Waiting up to 5m0s for pod "downward-api-c09bafe4-2c0d-4fba-84c4-1f009f8da39a" in namespace "downward-api-6797" to be "Succeeded or Failed"
Jun  1 12:44:48.912: INFO: Pod "downward-api-c09bafe4-2c0d-4fba-84c4-1f009f8da39a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.391221ms
Jun  1 12:44:50.917: INFO: Pod "downward-api-c09bafe4-2c0d-4fba-84c4-1f009f8da39a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008028936s
Jun  1 12:44:52.927: INFO: Pod "downward-api-c09bafe4-2c0d-4fba-84c4-1f009f8da39a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018140797s
STEP: Saw pod success
Jun  1 12:44:52.927: INFO: Pod "downward-api-c09bafe4-2c0d-4fba-84c4-1f009f8da39a" satisfied condition "Succeeded or Failed"
Jun  1 12:44:52.935: INFO: Trying to get logs from node kind-worker pod downward-api-c09bafe4-2c0d-4fba-84c4-1f009f8da39a container dapi-container: <nil>
STEP: delete the pod
Jun  1 12:44:52.960: INFO: Waiting for pod downward-api-c09bafe4-2c0d-4fba-84c4-1f009f8da39a to disappear
Jun  1 12:44:52.963: INFO: Pod downward-api-c09bafe4-2c0d-4fba-84c4-1f009f8da39a no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Jun  1 12:44:52.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6797" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":292,"completed":14,"skipped":269,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 12 lines ...
Jun  1 12:44:58.031: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Jun  1 12:44:59.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2655" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":292,"completed":15,"skipped":299,"failed":0}
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 12:44:59.178: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab9a0bf0-154f-4454-8944-6649757feff8" in namespace "downward-api-9695" to be "Succeeded or Failed"
Jun  1 12:44:59.187: INFO: Pod "downwardapi-volume-ab9a0bf0-154f-4454-8944-6649757feff8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.14229ms
Jun  1 12:45:01.209: INFO: Pod "downwardapi-volume-ab9a0bf0-154f-4454-8944-6649757feff8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03124659s
Jun  1 12:45:03.216: INFO: Pod "downwardapi-volume-ab9a0bf0-154f-4454-8944-6649757feff8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03795019s
STEP: Saw pod success
Jun  1 12:45:03.216: INFO: Pod "downwardapi-volume-ab9a0bf0-154f-4454-8944-6649757feff8" satisfied condition "Succeeded or Failed"
Jun  1 12:45:03.220: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-ab9a0bf0-154f-4454-8944-6649757feff8 container client-container: <nil>
STEP: delete the pod
Jun  1 12:45:03.248: INFO: Waiting for pod downwardapi-volume-ab9a0bf0-154f-4454-8944-6649757feff8 to disappear
Jun  1 12:45:03.251: INFO: Pod downwardapi-volume-ab9a0bf0-154f-4454-8944-6649757feff8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 12:45:03.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9695" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":292,"completed":16,"skipped":300,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 52 lines ...
Jun  1 12:45:23.140: INFO: stderr: ""
Jun  1 12:45:23.140: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 12:45:23.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1851" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":292,"completed":17,"skipped":309,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Jun  1 12:45:23.190: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 12:45:27.088: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 12:45:41.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9914" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":292,"completed":18,"skipped":342,"failed":0}
SSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath 
  runs ReplicaSets to verify preemption running path [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 32 lines ...
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/framework.go:175
Jun  1 12:47:25.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-4315" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:75
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":292,"completed":19,"skipped":346,"failed":0}
S
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 12:47:25.148: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 12:49:25.209: INFO: Deleting pod "var-expansion-bed8081a-6800-4248-975a-a168a2ebf422" in namespace "var-expansion-7165"
Jun  1 12:49:25.221: INFO: Wait up to 5m0s for pod "var-expansion-bed8081a-6800-4248-975a-a168a2ebf422" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 12:49:31.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7165" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":292,"completed":20,"skipped":347,"failed":0}
SSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
... skipping 42 lines ...
Jun  1 12:49:41.262: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 12:49:41.421: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  test/e2e/framework/framework.go:175
Jun  1 12:49:41.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-1666" for this suite.
•{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":21,"skipped":350,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 12:49:41.492: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e6fc0ebd-985e-4ecb-a1ff-8e1d52250f03" in namespace "projected-7343" to be "Succeeded or Failed"
Jun  1 12:49:41.497: INFO: Pod "downwardapi-volume-e6fc0ebd-985e-4ecb-a1ff-8e1d52250f03": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119413ms
Jun  1 12:49:43.504: INFO: Pod "downwardapi-volume-e6fc0ebd-985e-4ecb-a1ff-8e1d52250f03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011086956s
Jun  1 12:49:45.509: INFO: Pod "downwardapi-volume-e6fc0ebd-985e-4ecb-a1ff-8e1d52250f03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016788743s
STEP: Saw pod success
Jun  1 12:49:45.509: INFO: Pod "downwardapi-volume-e6fc0ebd-985e-4ecb-a1ff-8e1d52250f03" satisfied condition "Succeeded or Failed"
Jun  1 12:49:45.516: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-e6fc0ebd-985e-4ecb-a1ff-8e1d52250f03 container client-container: <nil>
STEP: delete the pod
Jun  1 12:49:45.557: INFO: Waiting for pod downwardapi-volume-e6fc0ebd-985e-4ecb-a1ff-8e1d52250f03 to disappear
Jun  1 12:49:45.564: INFO: Pod downwardapi-volume-e6fc0ebd-985e-4ecb-a1ff-8e1d52250f03 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 12:49:45.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7343" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":22,"skipped":353,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 22 lines ...
Jun  1 12:49:49.303: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jun  1 12:49:49.303: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:41191 --kubeconfig=/root/.kube/kind-test-config describe pod agnhost-master-spwl9 --namespace=kubectl-3511'
Jun  1 12:49:49.681: INFO: stderr: ""
Jun  1 12:49:49.681: INFO: stdout: "Name:         agnhost-master-spwl9\nNamespace:    kubectl-3511\nPriority:     0\nNode:         kind-worker/172.18.0.2\nStart Time:   Mon, 01 Jun 2020 12:49:46 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.244.1.23\nIPs:\n  IP:           10.244.1.23\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://dc3d5b3aab993a9576c397094e9c316c7c8093d2ce5dd4711d4f48b9e9770488\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 01 Jun 2020 12:49:48 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-swbrx (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-swbrx:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-swbrx\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From                  Message\n  ----    ------     ----  ----                  -------\n  Normal  Scheduled  3s    default-scheduler     Successfully assigned kubectl-3511/agnhost-master-spwl9 to kind-worker\n  Normal  Pulled     1s    kubelet, kind-worker  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n  Normal  Created    1s    kubelet, kind-worker  Created container agnhost-master\n  Normal  Started    1s    kubelet, kind-worker  Started container agnhost-master\n"
Jun  1 12:49:49.681: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:41191 --kubeconfig=/root/.kube/kind-test-config describe rc agnhost-master --namespace=kubectl-3511'
Jun  1 12:49:50.074: INFO: stderr: ""
Jun  1 12:49:50.074: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-3511\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  4s    replication-controller  Created pod: agnhost-master-spwl9\n"
Jun  1 12:49:50.074: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:41191 --kubeconfig=/root/.kube/kind-test-config describe service agnhost-master --namespace=kubectl-3511'
Jun  1 12:49:50.449: INFO: stderr: ""
Jun  1 12:49:50.449: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-3511\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.111.2.125\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.1.23:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jun  1 12:49:50.458: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:41191 --kubeconfig=/root/.kube/kind-test-config describe node kind-control-plane'
Jun  1 12:49:50.875: INFO: stderr: ""
Jun  1 12:49:50.876: INFO: stdout: "Name:               kind-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kind-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Mon, 01 Jun 2020 12:40:52 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kind-control-plane\n  AcquireTime:     <unset>\n  RenewTime:       Mon, 01 Jun 2020 12:49:42 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Mon, 01 Jun 2020 12:46:32 +0000   Mon, 01 Jun 2020 12:40:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Mon, 01 Jun 2020 12:46:32 +0000   Mon, 01 Jun 2020 12:40:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Mon, 01 Jun 2020 12:46:32 +0000   Mon, 01 Jun 2020 12:40:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Mon, 01 Jun 2020 12:46:32 +0000   Mon, 01 Jun 2020 12:41:32 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.4\n  Hostname:    kind-control-plane\nCapacity:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             53582972Ki\n  pods:               110\nAllocatable:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             53582972Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 cf8dc90f351447768ab1e1e676ce880a\n  System UUID:                77dda977-5eba-4ab3-827b-14c8b38fac35\n  Boot ID:                    ea08b850-a99e-453a-8958-7bd48109b501\n  Kernel Version:             4.15.0-1044-gke\n  OS Image:                   Ubuntu 20.04 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.4-12-g1e902b2d\n  Kubelet Version:            v1.19.0-beta.0.318+b618411f1edb98\n  Kube-Proxy Version:         v1.19.0-beta.0.318+b618411f1edb98\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-66bff467f8-5tfpl                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m39s\n  kube-system                 coredns-66bff467f8-9s59b                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m39s\n  kube-system      
           etcd-kind-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m48s\n  kube-system                 kindnet-7tmgp                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m39s\n  kube-system                 kube-apiserver-kind-control-plane             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m48s\n  kube-system                 kube-controller-manager-kind-control-plane    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m48s\n  kube-system                 kube-proxy-xdscb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s\n  kube-system                 kube-scheduler-kind-control-plane             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m48s\n  local-path-storage          local-path-provisioner-bd4bb6b75-h6kfc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (10%)  100m (1%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:\n  Type     Reason                    Age                    From                            Message\n  ----     ------                    ----                   ----                            -------\n  Normal   NodeHasSufficientMemory   9m13s (x6 over 9m14s)  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure     9m13s (x5 over 9m14s)  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID      9m13s (x5 over 9m14s)  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientPID\n  Normal   Starting                  8m49s                  kubelet, kind-control-plane     Starting kubelet.\n  Normal   NodeHasSufficientMemory   8m49s                  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure     8m49s                  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID      8m49s                  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientPID\n  Warning  CheckLimitsForResolvConf  8m48s                  kubelet, kind-control-plane     Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\n  Normal   NodeAllocatableEnforced   8m48s                  kubelet, kind-control-plane     Updated Node Allocatable limit across pods\n  Normal   Starting                  8m34s                  kube-proxy, kind-control-plane  Starting kube-proxy.\n  Normal   NodeReady                 8m18s                  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeReady\n"
Jun  1 12:49:50.876: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:41191 --kubeconfig=/root/.kube/kind-test-config describe namespace kubectl-3511'
Jun  1 12:49:51.154: INFO: stderr: ""
Jun  1 12:49:51.154: INFO: stdout: "Name:         kubectl-3511\nLabels:       e2e-framework=kubectl\n              e2e-run=86aa09c5-74d5-4244-a7a0-9e8f8584a5c9\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 12:49:51.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3511" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":292,"completed":23,"skipped":357,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 27 lines ...
Jun  1 12:50:13.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun  1 12:50:13.337: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Jun  1 12:50:13.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-779" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":292,"completed":24,"skipped":365,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 15 lines ...
  test/e2e/framework/framework.go:175
Jun  1 12:50:19.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2258" for this suite.
STEP: Destroying namespace "nsdeletetest-6230" for this suite.
Jun  1 12:50:19.529: INFO: Namespace nsdeletetest-6230 was already deleted
STEP: Destroying namespace "nsdeletetest-6693" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":292,"completed":25,"skipped":372,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 8 lines ...
Jun  1 12:50:19.628: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"f65648b5-8022-41e7-a8ef-78f9b240f77f", Controller:(*bool)(0xc001d32106), BlockOwnerDeletion:(*bool)(0xc001d32107)}}
Jun  1 12:50:19.634: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"bb9e61d3-c403-4d40-a1b8-dac97addeb8c", Controller:(*bool)(0xc0025e90c6), BlockOwnerDeletion:(*bool)(0xc0025e90c7)}}
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Jun  1 12:50:24.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5481" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":292,"completed":26,"skipped":383,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Jun  1 12:50:30.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6192" for this suite.
STEP: Destroying namespace "webhook-6192-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":292,"completed":27,"skipped":393,"failed":0}

------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 12 lines ...
STEP: reading a file in the container
Jun  1 12:50:38.532: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl exec --namespace=svcaccounts-9644 pod-service-account-2b7518e4-5717-4325-ba4d-bb562260966c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
Jun  1 12:50:39.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9644" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":292,"completed":28,"skipped":393,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
Jun  1 12:50:44.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7545" for this suite.
STEP: Destroying namespace "webhook-7545-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":292,"completed":29,"skipped":421,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 12:50:44.788: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun  1 12:50:44.861: INFO: Waiting up to 5m0s for pod "pod-27c5a1f7-c124-49a0-ba6b-e0deb74d78cc" in namespace "emptydir-4060" to be "Succeeded or Failed"
Jun  1 12:50:44.874: INFO: Pod "pod-27c5a1f7-c124-49a0-ba6b-e0deb74d78cc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.776438ms
Jun  1 12:50:46.881: INFO: Pod "pod-27c5a1f7-c124-49a0-ba6b-e0deb74d78cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020050524s
Jun  1 12:50:48.887: INFO: Pod "pod-27c5a1f7-c124-49a0-ba6b-e0deb74d78cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026052341s
STEP: Saw pod success
Jun  1 12:50:48.887: INFO: Pod "pod-27c5a1f7-c124-49a0-ba6b-e0deb74d78cc" satisfied condition "Succeeded or Failed"
Jun  1 12:50:48.893: INFO: Trying to get logs from node kind-worker2 pod pod-27c5a1f7-c124-49a0-ba6b-e0deb74d78cc container test-container: <nil>
STEP: delete the pod
Jun  1 12:50:48.912: INFO: Waiting for pod pod-27c5a1f7-c124-49a0-ba6b-e0deb74d78cc to disappear
Jun  1 12:50:48.917: INFO: Pod pod-27c5a1f7-c124-49a0-ba6b-e0deb74d78cc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 12:50:48.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4060" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":30,"skipped":422,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 17 lines ...
Jun  1 12:51:01.049: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun  1 12:51:01.056: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Jun  1 12:51:01.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3758" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":292,"completed":31,"skipped":435,"failed":0}
SS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Jun  1 12:51:01.069: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jun  1 12:51:01.116: INFO: Waiting up to 5m0s for pod "downward-api-b9174f98-6e50-4325-a4fc-acf2d041ba73" in namespace "downward-api-6421" to be "Succeeded or Failed"
Jun  1 12:51:01.120: INFO: Pod "downward-api-b9174f98-6e50-4325-a4fc-acf2d041ba73": Phase="Pending", Reason="", readiness=false. Elapsed: 3.736428ms
Jun  1 12:51:03.124: INFO: Pod "downward-api-b9174f98-6e50-4325-a4fc-acf2d041ba73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007816384s
Jun  1 12:51:05.129: INFO: Pod "downward-api-b9174f98-6e50-4325-a4fc-acf2d041ba73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013377745s
STEP: Saw pod success
Jun  1 12:51:05.129: INFO: Pod "downward-api-b9174f98-6e50-4325-a4fc-acf2d041ba73" satisfied condition "Succeeded or Failed"
Jun  1 12:51:05.134: INFO: Trying to get logs from node kind-worker pod downward-api-b9174f98-6e50-4325-a4fc-acf2d041ba73 container dapi-container: <nil>
STEP: delete the pod
Jun  1 12:51:05.157: INFO: Waiting for pod downward-api-b9174f98-6e50-4325-a4fc-acf2d041ba73 to disappear
Jun  1 12:51:05.160: INFO: Pod downward-api-b9174f98-6e50-4325-a4fc-acf2d041ba73 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Jun  1 12:51:05.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6421" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":292,"completed":32,"skipped":437,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
Jun  1 12:51:11.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2000" for this suite.
STEP: Destroying namespace "webhook-2000-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":292,"completed":33,"skipped":503,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 7 lines ...
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Jun  1 12:51:16.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4194" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":292,"completed":34,"skipped":515,"failed":0}

------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Jun  1 12:51:21.036: INFO: Initial restart count of pod busybox-ce30c8e7-c16c-411b-94f7-d66a419807f8 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 12:55:21.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9592" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":292,"completed":35,"skipped":515,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-531580f3-d3eb-4956-b277-59ceb3491bdf
STEP: Creating a pod to test consume secrets
Jun  1 12:55:21.940: INFO: Waiting up to 5m0s for pod "pod-secrets-c6e273ef-31c0-4c8a-a3c7-6bb821f7f162" in namespace "secrets-7419" to be "Succeeded or Failed"
Jun  1 12:55:21.944: INFO: Pod "pod-secrets-c6e273ef-31c0-4c8a-a3c7-6bb821f7f162": Phase="Pending", Reason="", readiness=false. Elapsed: 3.392441ms
Jun  1 12:55:23.950: INFO: Pod "pod-secrets-c6e273ef-31c0-4c8a-a3c7-6bb821f7f162": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009398363s
Jun  1 12:55:25.960: INFO: Pod "pod-secrets-c6e273ef-31c0-4c8a-a3c7-6bb821f7f162": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019668023s
STEP: Saw pod success
Jun  1 12:55:25.960: INFO: Pod "pod-secrets-c6e273ef-31c0-4c8a-a3c7-6bb821f7f162" satisfied condition "Succeeded or Failed"
Jun  1 12:55:25.964: INFO: Trying to get logs from node kind-worker pod pod-secrets-c6e273ef-31c0-4c8a-a3c7-6bb821f7f162 container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 12:55:25.999: INFO: Waiting for pod pod-secrets-c6e273ef-31c0-4c8a-a3c7-6bb821f7f162 to disappear
Jun  1 12:55:26.001: INFO: Pod pod-secrets-c6e273ef-31c0-4c8a-a3c7-6bb821f7f162 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 12:55:26.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7419" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":292,"completed":36,"skipped":556,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Events 
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Events
... skipping 11 lines ...
STEP: deleting the test event
STEP: listing all events in all namespaces
[AfterEach] [sig-api-machinery] Events
  test/e2e/framework/framework.go:175
Jun  1 12:55:26.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9260" for this suite.
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":292,"completed":37,"skipped":562,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Jun  1 12:55:31.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9160" for this suite.
STEP: Destroying namespace "webhook-9160-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":292,"completed":38,"skipped":571,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should be possible to delete [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Jun  1 12:55:32.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9147" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":292,"completed":39,"skipped":601,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 39 lines ...
Jun  1 12:55:42.704: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:41191 --kubeconfig=/root/.kube/kind-test-config explain e2e-test-crd-publish-openapi-6544-crds.spec'
Jun  1 12:55:43.486: INFO: stderr: ""
Jun  1 12:55:43.486: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6544-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jun  1 12:55:43.486: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:41191 --kubeconfig=/root/.kube/kind-test-config explain e2e-test-crd-publish-openapi-6544-crds.spec.bars'
Jun  1 12:55:44.213: INFO: stderr: ""
Jun  1 12:55:44.214: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6544-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jun  1 12:55:44.214: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:41191 --kubeconfig=/root/.kube/kind-test-config explain e2e-test-crd-publish-openapi-6544-crds.spec.bars2'
Jun  1 12:55:44.965: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 12:55:48.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9995" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":292,"completed":40,"skipped":608,"failed":0}
SS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
Jun  1 12:56:44.810: INFO: Restart count of pod container-probe-1203/busybox-47ad3755-57b3-4e91-a6d0-6eeb292a89c5 is now 1 (52.171844404s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 12:56:44.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1203" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":292,"completed":41,"skipped":610,"failed":0}
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 30 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Jun  1 12:56:53.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-925" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":292,"completed":42,"skipped":613,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-e6daa067-d5c5-4a2b-95ff-d0eac5734956
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 12:57:01.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9842" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":43,"skipped":634,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 12:57:13.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6920" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":292,"completed":44,"skipped":644,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 12:57:13.876: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun  1 12:57:13.935: INFO: Waiting up to 5m0s for pod "pod-8fd16d31-aa00-4615-b0df-5e73cc7db898" in namespace "emptydir-1115" to be "Succeeded or Failed"
Jun  1 12:57:13.941: INFO: Pod "pod-8fd16d31-aa00-4615-b0df-5e73cc7db898": Phase="Pending", Reason="", readiness=false. Elapsed: 5.528153ms
Jun  1 12:57:15.945: INFO: Pod "pod-8fd16d31-aa00-4615-b0df-5e73cc7db898": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009531714s
Jun  1 12:57:17.950: INFO: Pod "pod-8fd16d31-aa00-4615-b0df-5e73cc7db898": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01470049s
STEP: Saw pod success
Jun  1 12:57:17.950: INFO: Pod "pod-8fd16d31-aa00-4615-b0df-5e73cc7db898" satisfied condition "Succeeded or Failed"
Jun  1 12:57:17.953: INFO: Trying to get logs from node kind-worker2 pod pod-8fd16d31-aa00-4615-b0df-5e73cc7db898 container test-container: <nil>
STEP: delete the pod
Jun  1 12:57:17.973: INFO: Waiting for pod pod-8fd16d31-aa00-4615-b0df-5e73cc7db898 to disappear
Jun  1 12:57:17.976: INFO: Pod pod-8fd16d31-aa00-4615-b0df-5e73cc7db898 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 12:57:17.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1115" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":45,"skipped":651,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Updating configmap configmap-test-upd-11827cdf-755f-4d1b-a372-40f31443b2b9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 12:58:44.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1246" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":46,"skipped":653,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Jun  1 12:58:44.674: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override all
Jun  1 12:58:44.746: INFO: Waiting up to 5m0s for pod "client-containers-fc4e8b01-484d-486a-b680-62e0020be05a" in namespace "containers-9239" to be "Succeeded or Failed"
Jun  1 12:58:44.753: INFO: Pod "client-containers-fc4e8b01-484d-486a-b680-62e0020be05a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.691818ms
Jun  1 12:58:46.767: INFO: Pod "client-containers-fc4e8b01-484d-486a-b680-62e0020be05a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021296399s
Jun  1 12:58:48.772: INFO: Pod "client-containers-fc4e8b01-484d-486a-b680-62e0020be05a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026354437s
STEP: Saw pod success
Jun  1 12:58:48.772: INFO: Pod "client-containers-fc4e8b01-484d-486a-b680-62e0020be05a" satisfied condition "Succeeded or Failed"
Jun  1 12:58:48.776: INFO: Trying to get logs from node kind-worker pod client-containers-fc4e8b01-484d-486a-b680-62e0020be05a container test-container: <nil>
STEP: delete the pod
Jun  1 12:58:48.822: INFO: Waiting for pod client-containers-fc4e8b01-484d-486a-b680-62e0020be05a to disappear
Jun  1 12:58:48.830: INFO: Pod client-containers-fc4e8b01-484d-486a-b680-62e0020be05a no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Jun  1 12:58:48.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9239" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":292,"completed":47,"skipped":664,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Jun  1 12:58:55.883: INFO: stderr: ""
Jun  1 12:58:55.883: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1803-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 12:58:59.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1877" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":292,"completed":48,"skipped":674,"failed":0}
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 338 lines ...
Jun  1 12:59:05.247: INFO: Deleting ReplicationController proxy-service-9hr69 took: 15.407754ms
Jun  1 12:59:05.552: INFO: Terminating ReplicationController proxy-service-9hr69 pods took: 305.573225ms
[AfterEach] version v1
  test/e2e/framework/framework.go:175
Jun  1 12:59:13.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7156" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":292,"completed":49,"skipped":678,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 45 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 12:59:33.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2708" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":292,"completed":50,"skipped":760,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Jun  1 12:59:34.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-16" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":292,"completed":51,"skipped":770,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Jun  1 12:59:34.639: INFO: stderr: ""
Jun  1 12:59:34.639: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://127.0.0.1:41191\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://127.0.0.1:41191/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 12:59:34.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4826" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":292,"completed":52,"skipped":804,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-3ae05bff-247f-4af8-9197-32080ff468b5
STEP: Creating a pod to test consume configMaps
Jun  1 12:59:34.731: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6f52d4ef-8337-44f3-bbc5-f3e44c1bd65c" in namespace "projected-4342" to be "Succeeded or Failed"
Jun  1 12:59:34.740: INFO: Pod "pod-projected-configmaps-6f52d4ef-8337-44f3-bbc5-f3e44c1bd65c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.447864ms
Jun  1 12:59:36.748: INFO: Pod "pod-projected-configmaps-6f52d4ef-8337-44f3-bbc5-f3e44c1bd65c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016738795s
Jun  1 12:59:38.753: INFO: Pod "pod-projected-configmaps-6f52d4ef-8337-44f3-bbc5-f3e44c1bd65c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021742777s
STEP: Saw pod success
Jun  1 12:59:38.753: INFO: Pod "pod-projected-configmaps-6f52d4ef-8337-44f3-bbc5-f3e44c1bd65c" satisfied condition "Succeeded or Failed"
Jun  1 12:59:38.757: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-6f52d4ef-8337-44f3-bbc5-f3e44c1bd65c container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 12:59:38.776: INFO: Waiting for pod pod-projected-configmaps-6f52d4ef-8337-44f3-bbc5-f3e44c1bd65c to disappear
Jun  1 12:59:38.780: INFO: Pod pod-projected-configmaps-6f52d4ef-8337-44f3-bbc5-f3e44c1bd65c no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 12:59:38.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4342" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":53,"skipped":839,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-f86b1d07-cc76-4486-8c29-c6a80cf64c9a
STEP: Creating a pod to test consume secrets
Jun  1 12:59:38.863: INFO: Waiting up to 5m0s for pod "pod-secrets-961fd4d4-6e1b-486a-9cd6-4c7a5a199c24" in namespace "secrets-4811" to be "Succeeded or Failed"
Jun  1 12:59:38.868: INFO: Pod "pod-secrets-961fd4d4-6e1b-486a-9cd6-4c7a5a199c24": Phase="Pending", Reason="", readiness=false. Elapsed: 4.975556ms
Jun  1 12:59:40.876: INFO: Pod "pod-secrets-961fd4d4-6e1b-486a-9cd6-4c7a5a199c24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012278183s
Jun  1 12:59:42.888: INFO: Pod "pod-secrets-961fd4d4-6e1b-486a-9cd6-4c7a5a199c24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024764919s
STEP: Saw pod success
Jun  1 12:59:42.888: INFO: Pod "pod-secrets-961fd4d4-6e1b-486a-9cd6-4c7a5a199c24" satisfied condition "Succeeded or Failed"
Jun  1 12:59:42.893: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-961fd4d4-6e1b-486a-9cd6-4c7a5a199c24 container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 12:59:42.911: INFO: Waiting for pod pod-secrets-961fd4d4-6e1b-486a-9cd6-4c7a5a199c24 to disappear
Jun  1 12:59:42.917: INFO: Pod pod-secrets-961fd4d4-6e1b-486a-9cd6-4c7a5a199c24 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 12:59:42.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4811" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":54,"skipped":853,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  test/e2e/framework/framework.go:175
Jun  1 12:59:50.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8691" for this suite.
STEP: Destroying namespace "webhook-8691-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":292,"completed":55,"skipped":882,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-58d9d9fe-6a9c-4be9-8921-e12b7419b058
STEP: Creating a pod to test consume configMaps
Jun  1 12:59:50.908: INFO: Waiting up to 5m0s for pod "pod-configmaps-2b6fa88c-c9dc-4a8d-b125-09405577efc6" in namespace "configmap-1844" to be "Succeeded or Failed"
Jun  1 12:59:50.933: INFO: Pod "pod-configmaps-2b6fa88c-c9dc-4a8d-b125-09405577efc6": Phase="Pending", Reason="", readiness=false. Elapsed: 24.973999ms
Jun  1 12:59:52.937: INFO: Pod "pod-configmaps-2b6fa88c-c9dc-4a8d-b125-09405577efc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029462317s
Jun  1 12:59:54.945: INFO: Pod "pod-configmaps-2b6fa88c-c9dc-4a8d-b125-09405577efc6": Phase="Running", Reason="", readiness=true. Elapsed: 4.03665481s
Jun  1 12:59:56.949: INFO: Pod "pod-configmaps-2b6fa88c-c9dc-4a8d-b125-09405577efc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041015718s
STEP: Saw pod success
Jun  1 12:59:56.949: INFO: Pod "pod-configmaps-2b6fa88c-c9dc-4a8d-b125-09405577efc6" satisfied condition "Succeeded or Failed"
Jun  1 12:59:56.953: INFO: Trying to get logs from node kind-worker pod pod-configmaps-2b6fa88c-c9dc-4a8d-b125-09405577efc6 container configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 12:59:56.972: INFO: Waiting for pod pod-configmaps-2b6fa88c-c9dc-4a8d-b125-09405577efc6 to disappear
Jun  1 12:59:56.976: INFO: Pod pod-configmaps-2b6fa88c-c9dc-4a8d-b125-09405577efc6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 12:59:56.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1844" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":292,"completed":56,"skipped":890,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Aggregator
... skipping 20 lines ...
[AfterEach] [sig-api-machinery] Aggregator
  test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  test/e2e/framework/framework.go:175
Jun  1 13:00:16.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-5845" for this suite.
•{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":292,"completed":57,"skipped":903,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jun  1 13:00:21.138: INFO: Successfully updated pod "pod-update-activedeadlineseconds-0d0c1170-92eb-468b-b56f-d24dbedd1c4a"
Jun  1 13:00:21.139: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-0d0c1170-92eb-468b-b56f-d24dbedd1c4a" in namespace "pods-9559" to be "terminated due to deadline exceeded"
Jun  1 13:00:21.149: INFO: Pod "pod-update-activedeadlineseconds-0d0c1170-92eb-468b-b56f-d24dbedd1c4a": Phase="Running", Reason="", readiness=true. Elapsed: 10.198112ms
Jun  1 13:00:23.154: INFO: Pod "pod-update-activedeadlineseconds-0d0c1170-92eb-468b-b56f-d24dbedd1c4a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.014603198s
Jun  1 13:00:23.154: INFO: Pod "pod-update-activedeadlineseconds-0d0c1170-92eb-468b-b56f-d24dbedd1c4a" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Jun  1 13:00:23.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9559" for this suite.
•{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":292,"completed":58,"skipped":983,"failed":0}
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 26 lines ...
Jun  1 13:00:43.551: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 13:00:43.737: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Jun  1 13:00:43.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-272" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":59,"skipped":990,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 73 lines ...
Jun  1 13:01:23.143: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-697/pods","resourceVersion":"6938"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Jun  1 13:01:23.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-697" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":292,"completed":60,"skipped":1001,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 38 lines ...
• [SLOW TEST:308.245 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":292,"completed":61,"skipped":1016,"failed":0}
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 100 lines ...
Jun  1 13:07:55.678: INFO: Waiting for statefulset status.replicas updated to 0
Jun  1 13:07:55.683: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Jun  1 13:07:55.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6745" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":292,"completed":62,"skipped":1023,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 22 lines ...
Jun  1 13:08:25.878: INFO: Waiting for statefulset status.replicas updated to 0
Jun  1 13:08:25.884: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Jun  1 13:08:25.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9835" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":292,"completed":63,"skipped":1034,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-qmfb
STEP: Creating a pod to test atomic-volume-subpath
Jun  1 13:08:25.958: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qmfb" in namespace "subpath-5661" to be "Succeeded or Failed"
Jun  1 13:08:25.960: INFO: Pod "pod-subpath-test-configmap-qmfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.719854ms
Jun  1 13:08:27.966: INFO: Pod "pod-subpath-test-configmap-qmfb": Phase="Running", Reason="", readiness=true. Elapsed: 2.008013076s
Jun  1 13:08:29.976: INFO: Pod "pod-subpath-test-configmap-qmfb": Phase="Running", Reason="", readiness=true. Elapsed: 4.018498771s
Jun  1 13:08:31.983: INFO: Pod "pod-subpath-test-configmap-qmfb": Phase="Running", Reason="", readiness=true. Elapsed: 6.02579844s
Jun  1 13:08:33.988: INFO: Pod "pod-subpath-test-configmap-qmfb": Phase="Running", Reason="", readiness=true. Elapsed: 8.030630261s
Jun  1 13:08:35.996: INFO: Pod "pod-subpath-test-configmap-qmfb": Phase="Running", Reason="", readiness=true. Elapsed: 10.03845927s
... skipping 2 lines ...
Jun  1 13:08:42.009: INFO: Pod "pod-subpath-test-configmap-qmfb": Phase="Running", Reason="", readiness=true. Elapsed: 16.051835248s
Jun  1 13:08:44.015: INFO: Pod "pod-subpath-test-configmap-qmfb": Phase="Running", Reason="", readiness=true. Elapsed: 18.057112775s
Jun  1 13:08:46.020: INFO: Pod "pod-subpath-test-configmap-qmfb": Phase="Running", Reason="", readiness=true. Elapsed: 20.062696064s
Jun  1 13:08:48.025: INFO: Pod "pod-subpath-test-configmap-qmfb": Phase="Running", Reason="", readiness=true. Elapsed: 22.067553751s
Jun  1 13:08:50.032: INFO: Pod "pod-subpath-test-configmap-qmfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.07440127s
STEP: Saw pod success
Jun  1 13:08:50.032: INFO: Pod "pod-subpath-test-configmap-qmfb" satisfied condition "Succeeded or Failed"
Jun  1 13:08:50.039: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-configmap-qmfb container test-container-subpath-configmap-qmfb: <nil>
STEP: delete the pod
Jun  1 13:08:50.083: INFO: Waiting for pod pod-subpath-test-configmap-qmfb to disappear
Jun  1 13:08:50.087: INFO: Pod pod-subpath-test-configmap-qmfb no longer exists
STEP: Deleting pod pod-subpath-test-configmap-qmfb
Jun  1 13:08:50.087: INFO: Deleting pod "pod-subpath-test-configmap-qmfb" in namespace "subpath-5661"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Jun  1 13:08:50.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5661" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":292,"completed":64,"skipped":1046,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Jun  1 13:08:50.104: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's command
Jun  1 13:08:50.147: INFO: Waiting up to 5m0s for pod "var-expansion-aeb1e3b7-a74b-41d3-80a7-dd520e185c2a" in namespace "var-expansion-4260" to be "Succeeded or Failed"
Jun  1 13:08:50.150: INFO: Pod "var-expansion-aeb1e3b7-a74b-41d3-80a7-dd520e185c2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.754079ms
Jun  1 13:08:52.159: INFO: Pod "var-expansion-aeb1e3b7-a74b-41d3-80a7-dd520e185c2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012223574s
Jun  1 13:08:54.165: INFO: Pod "var-expansion-aeb1e3b7-a74b-41d3-80a7-dd520e185c2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017611839s
STEP: Saw pod success
Jun  1 13:08:54.165: INFO: Pod "var-expansion-aeb1e3b7-a74b-41d3-80a7-dd520e185c2a" satisfied condition "Succeeded or Failed"
Jun  1 13:08:54.169: INFO: Trying to get logs from node kind-worker2 pod var-expansion-aeb1e3b7-a74b-41d3-80a7-dd520e185c2a container dapi-container: <nil>
STEP: delete the pod
Jun  1 13:08:54.209: INFO: Waiting for pod var-expansion-aeb1e3b7-a74b-41d3-80a7-dd520e185c2a to disappear
Jun  1 13:08:54.214: INFO: Pod var-expansion-aeb1e3b7-a74b-41d3-80a7-dd520e185c2a no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 13:08:54.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4260" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":292,"completed":65,"skipped":1094,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 48 lines ...
Jun  1 13:09:01.608: INFO: stderr: ""
Jun  1 13:09:01.608: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 13:09:01.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6023" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":292,"completed":66,"skipped":1136,"failed":0}
SSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a volume subpath [sig-storage] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Jun  1 13:09:01.622: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in volume subpath
Jun  1 13:09:01.670: INFO: Waiting up to 5m0s for pod "var-expansion-e77a7810-ff13-4651-ac0c-775f412148f9" in namespace "var-expansion-9418" to be "Succeeded or Failed"
Jun  1 13:09:01.673: INFO: Pod "var-expansion-e77a7810-ff13-4651-ac0c-775f412148f9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.45526ms
Jun  1 13:09:03.678: INFO: Pod "var-expansion-e77a7810-ff13-4651-ac0c-775f412148f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007953279s
Jun  1 13:09:05.686: INFO: Pod "var-expansion-e77a7810-ff13-4651-ac0c-775f412148f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016508981s
STEP: Saw pod success
Jun  1 13:09:05.687: INFO: Pod "var-expansion-e77a7810-ff13-4651-ac0c-775f412148f9" satisfied condition "Succeeded or Failed"
Jun  1 13:09:05.693: INFO: Trying to get logs from node kind-worker pod var-expansion-e77a7810-ff13-4651-ac0c-775f412148f9 container dapi-container: <nil>
STEP: delete the pod
Jun  1 13:09:05.739: INFO: Waiting for pod var-expansion-e77a7810-ff13-4651-ac0c-775f412148f9 to disappear
Jun  1 13:09:05.744: INFO: Pod var-expansion-e77a7810-ff13-4651-ac0c-775f412148f9 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 13:09:05.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9418" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":292,"completed":67,"skipped":1141,"failed":0}
SSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Jun  1 13:09:09.807: INFO: Initial restart count of pod test-webserver-5b4c5b48-4b1e-4ad4-8a7c-9eef18eb7a09 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 13:13:10.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5107" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":292,"completed":68,"skipped":1147,"failed":0}
SS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 13:13:14.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-215" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":292,"completed":69,"skipped":1149,"failed":0}
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 29 lines ...
Jun  1 13:13:39.132: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 13:13:39.373: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Jun  1 13:13:39.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7774" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":292,"completed":70,"skipped":1154,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 2 lines ...
Jun  1 13:13:39.385: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun  1 13:13:42.465: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Jun  1 13:13:42.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4442" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":292,"completed":71,"skipped":1183,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 13 lines ...
Jun  1 13:13:43.622: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Jun  1 13:13:44.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4865" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":292,"completed":72,"skipped":1191,"failed":0}
SSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Pods Extended
... skipping 10 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  test/e2e/framework/framework.go:175
Jun  1 13:13:44.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4061" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":292,"completed":73,"skipped":1197,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 5 lines ...
[BeforeEach] [sig-network] Services
  test/e2e/network/service.go:809
[It] should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating service multi-endpoint-test in namespace services-5424
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5424 to expose endpoints map[]
Jun  1 13:13:44.825: INFO: Get endpoints failed (12.572291ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jun  1 13:13:45.829: INFO: successfully validated that service multi-endpoint-test in namespace services-5424 exposes endpoints map[] (1.01617899s elapsed)
STEP: Creating pod pod1 in namespace services-5424
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5424 to expose endpoints map[pod1:[100]]
Jun  1 13:13:48.898: INFO: successfully validated that service multi-endpoint-test in namespace services-5424 exposes endpoints map[pod1:[100]] (3.055874344s elapsed)
STEP: Creating pod pod2 in namespace services-5424
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5424 to expose endpoints map[pod1:[100] pod2:[101]]
... skipping 7 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 13:13:54.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5424" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":292,"completed":74,"skipped":1209,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 13:13:54.063: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
Jun  1 13:13:54.121: INFO: Waiting up to 5m0s for pod "pod-ca366a56-1cfe-4469-a0b8-186b2c84d543" in namespace "emptydir-5275" to be "Succeeded or Failed"
Jun  1 13:13:54.125: INFO: Pod "pod-ca366a56-1cfe-4469-a0b8-186b2c84d543": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315527ms
Jun  1 13:13:56.129: INFO: Pod "pod-ca366a56-1cfe-4469-a0b8-186b2c84d543": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00821103s
Jun  1 13:13:58.136: INFO: Pod "pod-ca366a56-1cfe-4469-a0b8-186b2c84d543": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014826201s
STEP: Saw pod success
Jun  1 13:13:58.136: INFO: Pod "pod-ca366a56-1cfe-4469-a0b8-186b2c84d543" satisfied condition "Succeeded or Failed"
Jun  1 13:13:58.140: INFO: Trying to get logs from node kind-worker2 pod pod-ca366a56-1cfe-4469-a0b8-186b2c84d543 container test-container: <nil>
STEP: delete the pod
Jun  1 13:13:58.172: INFO: Waiting for pod pod-ca366a56-1cfe-4469-a0b8-186b2c84d543 to disappear
Jun  1 13:13:58.174: INFO: Pod pod-ca366a56-1cfe-4469-a0b8-186b2c84d543 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 13:13:58.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5275" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":75,"skipped":1227,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 13:13:58.225: INFO: Waiting up to 5m0s for pod "downwardapi-volume-474e7eca-acb8-4e30-bf76-19ee77fcae25" in namespace "downward-api-4379" to be "Succeeded or Failed"
Jun  1 13:13:58.234: INFO: Pod "downwardapi-volume-474e7eca-acb8-4e30-bf76-19ee77fcae25": Phase="Pending", Reason="", readiness=false. Elapsed: 8.609021ms
Jun  1 13:14:00.245: INFO: Pod "downwardapi-volume-474e7eca-acb8-4e30-bf76-19ee77fcae25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019460595s
Jun  1 13:14:02.253: INFO: Pod "downwardapi-volume-474e7eca-acb8-4e30-bf76-19ee77fcae25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02791665s
STEP: Saw pod success
Jun  1 13:14:02.253: INFO: Pod "downwardapi-volume-474e7eca-acb8-4e30-bf76-19ee77fcae25" satisfied condition "Succeeded or Failed"
Jun  1 13:14:02.257: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-474e7eca-acb8-4e30-bf76-19ee77fcae25 container client-container: <nil>
STEP: delete the pod
Jun  1 13:14:02.283: INFO: Waiting for pod downwardapi-volume-474e7eca-acb8-4e30-bf76-19ee77fcae25 to disappear
Jun  1 13:14:02.287: INFO: Pod downwardapi-volume-474e7eca-acb8-4e30-bf76-19ee77fcae25 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 13:14:02.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4379" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":292,"completed":76,"skipped":1272,"failed":0}
SSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 26 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 13:14:10.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8407" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":292,"completed":77,"skipped":1279,"failed":0}

------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 13:14:28.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5082" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":292,"completed":78,"skipped":1279,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 13:14:28.769: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun  1 13:14:28.808: INFO: Waiting up to 5m0s for pod "pod-74462b6c-19bd-4051-8963-1516e74c47a0" in namespace "emptydir-2461" to be "Succeeded or Failed"
Jun  1 13:14:28.812: INFO: Pod "pod-74462b6c-19bd-4051-8963-1516e74c47a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041497ms
Jun  1 13:14:30.817: INFO: Pod "pod-74462b6c-19bd-4051-8963-1516e74c47a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008994629s
Jun  1 13:14:32.822: INFO: Pod "pod-74462b6c-19bd-4051-8963-1516e74c47a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013443074s
STEP: Saw pod success
Jun  1 13:14:32.822: INFO: Pod "pod-74462b6c-19bd-4051-8963-1516e74c47a0" satisfied condition "Succeeded or Failed"
Jun  1 13:14:32.828: INFO: Trying to get logs from node kind-worker2 pod pod-74462b6c-19bd-4051-8963-1516e74c47a0 container test-container: <nil>
STEP: delete the pod
Jun  1 13:14:32.859: INFO: Waiting for pod pod-74462b6c-19bd-4051-8963-1516e74c47a0 to disappear
Jun  1 13:14:32.866: INFO: Pod pod-74462b6c-19bd-4051-8963-1516e74c47a0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 13:14:32.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2461" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":79,"skipped":1284,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 43 lines ...
Jun  1 13:16:23.234: INFO: Waiting for statefulset status.replicas updated to 0
Jun  1 13:16:23.241: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Jun  1 13:16:23.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2994" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":292,"completed":80,"skipped":1311,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Jun  1 13:16:23.280: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's args
Jun  1 13:16:23.324: INFO: Waiting up to 5m0s for pod "var-expansion-6d640496-9e45-4501-809d-b1a23adc212a" in namespace "var-expansion-9080" to be "Succeeded or Failed"
Jun  1 13:16:23.327: INFO: Pod "var-expansion-6d640496-9e45-4501-809d-b1a23adc212a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.256712ms
Jun  1 13:16:25.332: INFO: Pod "var-expansion-6d640496-9e45-4501-809d-b1a23adc212a": Phase="Running", Reason="", readiness=true. Elapsed: 2.008105129s
Jun  1 13:16:27.336: INFO: Pod "var-expansion-6d640496-9e45-4501-809d-b1a23adc212a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012711673s
STEP: Saw pod success
Jun  1 13:16:27.337: INFO: Pod "var-expansion-6d640496-9e45-4501-809d-b1a23adc212a" satisfied condition "Succeeded or Failed"
Jun  1 13:16:27.344: INFO: Trying to get logs from node kind-worker pod var-expansion-6d640496-9e45-4501-809d-b1a23adc212a container dapi-container: <nil>
STEP: delete the pod
Jun  1 13:16:27.375: INFO: Waiting for pod var-expansion-6d640496-9e45-4501-809d-b1a23adc212a to disappear
Jun  1 13:16:27.380: INFO: Pod var-expansion-6d640496-9e45-4501-809d-b1a23adc212a no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 13:16:27.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9080" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":292,"completed":81,"skipped":1328,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-35f99408-7f6c-4f0d-8ab8-9a28978a2e23
STEP: Creating a pod to test consume secrets
Jun  1 13:16:27.432: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b29cceed-2995-4321-bf8e-44f9b0207201" in namespace "projected-8737" to be "Succeeded or Failed"
Jun  1 13:16:27.436: INFO: Pod "pod-projected-secrets-b29cceed-2995-4321-bf8e-44f9b0207201": Phase="Pending", Reason="", readiness=false. Elapsed: 3.707033ms
Jun  1 13:16:29.440: INFO: Pod "pod-projected-secrets-b29cceed-2995-4321-bf8e-44f9b0207201": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007963212s
Jun  1 13:16:31.446: INFO: Pod "pod-projected-secrets-b29cceed-2995-4321-bf8e-44f9b0207201": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013508367s
STEP: Saw pod success
Jun  1 13:16:31.446: INFO: Pod "pod-projected-secrets-b29cceed-2995-4321-bf8e-44f9b0207201" satisfied condition "Succeeded or Failed"
Jun  1 13:16:31.451: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-b29cceed-2995-4321-bf8e-44f9b0207201 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun  1 13:16:31.488: INFO: Waiting for pod pod-projected-secrets-b29cceed-2995-4321-bf8e-44f9b0207201 to disappear
Jun  1 13:16:31.492: INFO: Pod pod-projected-secrets-b29cceed-2995-4321-bf8e-44f9b0207201 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Jun  1 13:16:31.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8737" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":82,"skipped":1353,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Jun  1 13:16:31.542: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 13:16:32.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2432" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":292,"completed":83,"skipped":1390,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 13:16:32.192: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ccce04df-d192-4187-91a8-7f399e848cd8" in namespace "projected-0" to be "Succeeded or Failed"
Jun  1 13:16:32.197: INFO: Pod "downwardapi-volume-ccce04df-d192-4187-91a8-7f399e848cd8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.872127ms
Jun  1 13:16:34.208: INFO: Pod "downwardapi-volume-ccce04df-d192-4187-91a8-7f399e848cd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016175168s
Jun  1 13:16:36.217: INFO: Pod "downwardapi-volume-ccce04df-d192-4187-91a8-7f399e848cd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024624291s
STEP: Saw pod success
Jun  1 13:16:36.217: INFO: Pod "downwardapi-volume-ccce04df-d192-4187-91a8-7f399e848cd8" satisfied condition "Succeeded or Failed"
Jun  1 13:16:36.222: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-ccce04df-d192-4187-91a8-7f399e848cd8 container client-container: <nil>
STEP: delete the pod
Jun  1 13:16:36.250: INFO: Waiting for pod downwardapi-volume-ccce04df-d192-4187-91a8-7f399e848cd8 to disappear
Jun  1 13:16:36.253: INFO: Pod downwardapi-volume-ccce04df-d192-4187-91a8-7f399e848cd8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 13:16:36.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-0" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":84,"skipped":1408,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Jun  1 13:16:39.338: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Jun  1 13:16:39.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7045" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":292,"completed":85,"skipped":1428,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
Jun  1 13:16:43.985: INFO: Successfully updated pod "labelsupdateb979c4d0-7079-4fea-894b-c31504c2fd68"
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 13:16:46.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1629" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":292,"completed":86,"skipped":1474,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 13:16:57.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4599" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":292,"completed":87,"skipped":1560,"failed":0}

------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
Jun  1 13:16:59.708: INFO: Successfully updated pod "labelsupdate26f9cf49-ad20-4d49-bc01-426cd9df8022"
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 13:17:01.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7724" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":292,"completed":88,"skipped":1560,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 13:17:01.737: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on tmpfs
Jun  1 13:17:01.777: INFO: Waiting up to 5m0s for pod "pod-33d3a8ff-e61f-446b-9040-902bda7e4422" in namespace "emptydir-6892" to be "Succeeded or Failed"
Jun  1 13:17:01.780: INFO: Pod "pod-33d3a8ff-e61f-446b-9040-902bda7e4422": Phase="Pending", Reason="", readiness=false. Elapsed: 2.680682ms
Jun  1 13:17:03.792: INFO: Pod "pod-33d3a8ff-e61f-446b-9040-902bda7e4422": Phase="Running", Reason="", readiness=true. Elapsed: 2.015202237s
Jun  1 13:17:05.801: INFO: Pod "pod-33d3a8ff-e61f-446b-9040-902bda7e4422": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02425976s
STEP: Saw pod success
Jun  1 13:17:05.802: INFO: Pod "pod-33d3a8ff-e61f-446b-9040-902bda7e4422" satisfied condition "Succeeded or Failed"
Jun  1 13:17:05.811: INFO: Trying to get logs from node kind-worker2 pod pod-33d3a8ff-e61f-446b-9040-902bda7e4422 container test-container: <nil>
STEP: delete the pod
Jun  1 13:17:05.843: INFO: Waiting for pod pod-33d3a8ff-e61f-446b-9040-902bda7e4422 to disappear
Jun  1 13:17:05.846: INFO: Pod pod-33d3a8ff-e61f-446b-9040-902bda7e4422 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 13:17:05.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6892" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":89,"skipped":1570,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Jun  1 13:17:05.906: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 13:17:09.589: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 13:17:23.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2960" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":292,"completed":90,"skipped":1580,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 13:17:30.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-7122" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":292,"completed":91,"skipped":1590,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  test/e2e/framework/framework.go:175
Jun  1 13:17:36.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4847" for this suite.
STEP: Destroying namespace "webhook-4847-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":292,"completed":92,"skipped":1628,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 13 lines ...
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 13:18:05.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3482" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":292,"completed":93,"skipped":1657,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 22 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 13:18:14.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2227" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":292,"completed":94,"skipped":1679,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 13:18:25.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3963" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":292,"completed":95,"skipped":1684,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 13:18:25.913: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun  1 13:18:25.968: INFO: Waiting up to 5m0s for pod "pod-13a1197c-3917-4cbb-a790-57a0d849c315" in namespace "emptydir-4480" to be "Succeeded or Failed"
Jun  1 13:18:25.972: INFO: Pod "pod-13a1197c-3917-4cbb-a790-57a0d849c315": Phase="Pending", Reason="", readiness=false. Elapsed: 4.361176ms
Jun  1 13:18:27.976: INFO: Pod "pod-13a1197c-3917-4cbb-a790-57a0d849c315": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008303758s
Jun  1 13:18:29.981: INFO: Pod "pod-13a1197c-3917-4cbb-a790-57a0d849c315": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013476995s
STEP: Saw pod success
Jun  1 13:18:29.981: INFO: Pod "pod-13a1197c-3917-4cbb-a790-57a0d849c315" satisfied condition "Succeeded or Failed"
Jun  1 13:18:29.985: INFO: Trying to get logs from node kind-worker2 pod pod-13a1197c-3917-4cbb-a790-57a0d849c315 container test-container: <nil>
STEP: delete the pod
Jun  1 13:18:30.003: INFO: Waiting for pod pod-13a1197c-3917-4cbb-a790-57a0d849c315 to disappear
Jun  1 13:18:30.006: INFO: Pod pod-13a1197c-3917-4cbb-a790-57a0d849c315 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 13:18:30.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4480" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":96,"skipped":1695,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Jun  1 13:18:32.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7931" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":292,"completed":97,"skipped":1703,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 28 lines ...
  test/e2e/framework/framework.go:175
Jun  1 13:18:47.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2827" for this suite.
STEP: Destroying namespace "webhook-2827-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":292,"completed":98,"skipped":1706,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 15 lines ...
Jun  1 13:18:53.945: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  test/e2e/framework/framework.go:597
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 13:19:06.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5546" for this suite.
STEP: Destroying namespace "webhook-5546-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":292,"completed":99,"skipped":1707,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 19 lines ...
Jun  1 13:19:09.278: INFO: Deleting pod "var-expansion-e0acea0a-5a73-4125-b2de-ba095db32149" in namespace "var-expansion-3644"
Jun  1 13:19:09.293: INFO: Wait up to 5m0s for pod "var-expansion-e0acea0a-5a73-4125-b2de-ba095db32149" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 13:19:45.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3644" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":292,"completed":100,"skipped":1727,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 57 lines ...
Jun  1 13:21:57.844: INFO: Waiting for statefulset status.replicas updated to 0
Jun  1 13:21:57.852: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Jun  1 13:21:57.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3496" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":292,"completed":101,"skipped":1743,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 17 lines ...
Jun  1 13:24:24.360: INFO: Restart count of pod container-probe-8777/liveness-08599974-a0b8-4f08-8573-c2271cfbf32d is now 5 (2m24.418814642s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 13:24:24.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8777" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":292,"completed":102,"skipped":1756,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] 
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
... skipping 19 lines ...
Jun  1 13:26:03.095: INFO: Noticed Pod "taint-eviction-b2" gets evicted.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
  test/e2e/framework/framework.go:175
Jun  1 13:26:03.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-multiple-pods-5907" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":292,"completed":103,"skipped":1765,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 26 lines ...
  test/e2e/framework/framework.go:175
Jun  1 13:26:08.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1558" for this suite.
STEP: Destroying namespace "webhook-1558-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":292,"completed":104,"skipped":1822,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 60 lines ...
Jun  1 13:26:17.605: INFO: stderr: ""
Jun  1 13:26:17.605: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 13:26:17.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4372" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":292,"completed":105,"skipped":1830,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 13:26:17.669: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d09fd14-aeb8-47e6-b07b-c0b0c9920f58" in namespace "downward-api-4695" to be "Succeeded or Failed"
Jun  1 13:26:17.679: INFO: Pod "downwardapi-volume-0d09fd14-aeb8-47e6-b07b-c0b0c9920f58": Phase="Pending", Reason="", readiness=false. Elapsed: 9.827951ms
Jun  1 13:26:19.689: INFO: Pod "downwardapi-volume-0d09fd14-aeb8-47e6-b07b-c0b0c9920f58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019503337s
Jun  1 13:26:21.697: INFO: Pod "downwardapi-volume-0d09fd14-aeb8-47e6-b07b-c0b0c9920f58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027858172s
STEP: Saw pod success
Jun  1 13:26:21.697: INFO: Pod "downwardapi-volume-0d09fd14-aeb8-47e6-b07b-c0b0c9920f58" satisfied condition "Succeeded or Failed"
Jun  1 13:26:21.704: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-0d09fd14-aeb8-47e6-b07b-c0b0c9920f58 container client-container: <nil>
STEP: delete the pod
Jun  1 13:26:21.742: INFO: Waiting for pod downwardapi-volume-0d09fd14-aeb8-47e6-b07b-c0b0c9920f58 to disappear
Jun  1 13:26:21.748: INFO: Pod downwardapi-volume-0d09fd14-aeb8-47e6-b07b-c0b0c9920f58 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 13:26:21.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4695" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":106,"skipped":1872,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 35 lines ...
Jun  1 13:28:37.100: INFO: Deleting pod "var-expansion-6fd451ca-b207-4f2e-8a5f-c552df066e6a" in namespace "var-expansion-7747"
Jun  1 13:28:37.107: INFO: Wait up to 5m0s for pod "var-expansion-6fd451ca-b207-4f2e-8a5f-c552df066e6a" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 13:29:11.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7747" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":292,"completed":107,"skipped":1882,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-93a49a52-3120-4965-b535-a90b194e015f
STEP: Creating a pod to test consume configMaps
Jun  1 13:29:11.177: INFO: Waiting up to 5m0s for pod "pod-configmaps-2936bb58-de26-42c1-b429-af909894ea97" in namespace "configmap-3849" to be "Succeeded or Failed"
Jun  1 13:29:11.183: INFO: Pod "pod-configmaps-2936bb58-de26-42c1-b429-af909894ea97": Phase="Pending", Reason="", readiness=false. Elapsed: 6.146942ms
Jun  1 13:29:13.189: INFO: Pod "pod-configmaps-2936bb58-de26-42c1-b429-af909894ea97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012279344s
Jun  1 13:29:15.195: INFO: Pod "pod-configmaps-2936bb58-de26-42c1-b429-af909894ea97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018095858s
STEP: Saw pod success
Jun  1 13:29:15.195: INFO: Pod "pod-configmaps-2936bb58-de26-42c1-b429-af909894ea97" satisfied condition "Succeeded or Failed"
Jun  1 13:29:15.200: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-2936bb58-de26-42c1-b429-af909894ea97 container configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 13:29:15.242: INFO: Waiting for pod pod-configmaps-2936bb58-de26-42c1-b429-af909894ea97 to disappear
Jun  1 13:29:15.245: INFO: Pod pod-configmaps-2936bb58-de26-42c1-b429-af909894ea97 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 13:29:15.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3849" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":108,"skipped":1891,"failed":0}
S
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun  1 13:29:23.426: INFO: File wheezy_udp@dns-test-service-3.dns-7196.svc.cluster.local from pod  dns-7196/dns-test-a13b9cce-6a1c-4749-b6d6-16cdd977b8c4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 13:29:23.430: INFO: File jessie_udp@dns-test-service-3.dns-7196.svc.cluster.local from pod  dns-7196/dns-test-a13b9cce-6a1c-4749-b6d6-16cdd977b8c4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 13:29:23.431: INFO: Lookups using dns-7196/dns-test-a13b9cce-6a1c-4749-b6d6-16cdd977b8c4 failed for: [wheezy_udp@dns-test-service-3.dns-7196.svc.cluster.local jessie_udp@dns-test-service-3.dns-7196.svc.cluster.local]

Jun  1 13:29:28.440: INFO: File wheezy_udp@dns-test-service-3.dns-7196.svc.cluster.local from pod  dns-7196/dns-test-a13b9cce-6a1c-4749-b6d6-16cdd977b8c4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 13:29:28.445: INFO: File jessie_udp@dns-test-service-3.dns-7196.svc.cluster.local from pod  dns-7196/dns-test-a13b9cce-6a1c-4749-b6d6-16cdd977b8c4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 13:29:28.445: INFO: Lookups using dns-7196/dns-test-a13b9cce-6a1c-4749-b6d6-16cdd977b8c4 failed for: [wheezy_udp@dns-test-service-3.dns-7196.svc.cluster.local jessie_udp@dns-test-service-3.dns-7196.svc.cluster.local]

Jun  1 13:29:33.439: INFO: File wheezy_udp@dns-test-service-3.dns-7196.svc.cluster.local from pod  dns-7196/dns-test-a13b9cce-6a1c-4749-b6d6-16cdd977b8c4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 13:29:33.445: INFO: File jessie_udp@dns-test-service-3.dns-7196.svc.cluster.local from pod  dns-7196/dns-test-a13b9cce-6a1c-4749-b6d6-16cdd977b8c4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 13:29:33.445: INFO: Lookups using dns-7196/dns-test-a13b9cce-6a1c-4749-b6d6-16cdd977b8c4 failed for: [wheezy_udp@dns-test-service-3.dns-7196.svc.cluster.local jessie_udp@dns-test-service-3.dns-7196.svc.cluster.local]

Jun  1 13:29:38.436: INFO: File wheezy_udp@dns-test-service-3.dns-7196.svc.cluster.local from pod  dns-7196/dns-test-a13b9cce-6a1c-4749-b6d6-16cdd977b8c4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 13:29:38.440: INFO: File jessie_udp@dns-test-service-3.dns-7196.svc.cluster.local from pod  dns-7196/dns-test-a13b9cce-6a1c-4749-b6d6-16cdd977b8c4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 13:29:38.440: INFO: Lookups using dns-7196/dns-test-a13b9cce-6a1c-4749-b6d6-16cdd977b8c4 failed for: [wheezy_udp@dns-test-service-3.dns-7196.svc.cluster.local jessie_udp@dns-test-service-3.dns-7196.svc.cluster.local]

Jun  1 13:29:43.441: INFO: File wheezy_udp@dns-test-service-3.dns-7196.svc.cluster.local from pod  dns-7196/dns-test-a13b9cce-6a1c-4749-b6d6-16cdd977b8c4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 13:29:43.445: INFO: File jessie_udp@dns-test-service-3.dns-7196.svc.cluster.local from pod  dns-7196/dns-test-a13b9cce-6a1c-4749-b6d6-16cdd977b8c4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 13:29:43.445: INFO: Lookups using dns-7196/dns-test-a13b9cce-6a1c-4749-b6d6-16cdd977b8c4 failed for: [wheezy_udp@dns-test-service-3.dns-7196.svc.cluster.local jessie_udp@dns-test-service-3.dns-7196.svc.cluster.local]

Jun  1 13:29:48.440: INFO: File jessie_udp@dns-test-service-3.dns-7196.svc.cluster.local from pod  dns-7196/dns-test-a13b9cce-6a1c-4749-b6d6-16cdd977b8c4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 13:29:48.440: INFO: Lookups using dns-7196/dns-test-a13b9cce-6a1c-4749-b6d6-16cdd977b8c4 failed for: [jessie_udp@dns-test-service-3.dns-7196.svc.cluster.local]

Jun  1 13:29:53.445: INFO: DNS probes using dns-test-a13b9cce-6a1c-4749-b6d6-16cdd977b8c4 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7196.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7196.svc.cluster.local; sleep 1; done
... skipping 9 lines ...
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 13:29:57.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7196" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":292,"completed":109,"skipped":1892,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-5579a7a3-7c6a-40c9-8996-1e566dcfcc24
STEP: Creating a pod to test consume secrets
Jun  1 13:29:57.668: INFO: Waiting up to 5m0s for pod "pod-secrets-3c0066b8-49c6-464c-b3e7-1a11ffacea97" in namespace "secrets-1902" to be "Succeeded or Failed"
Jun  1 13:29:57.673: INFO: Pod "pod-secrets-3c0066b8-49c6-464c-b3e7-1a11ffacea97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.366537ms
Jun  1 13:29:59.678: INFO: Pod "pod-secrets-3c0066b8-49c6-464c-b3e7-1a11ffacea97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008993173s
Jun  1 13:30:01.690: INFO: Pod "pod-secrets-3c0066b8-49c6-464c-b3e7-1a11ffacea97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021063754s
STEP: Saw pod success
Jun  1 13:30:01.690: INFO: Pod "pod-secrets-3c0066b8-49c6-464c-b3e7-1a11ffacea97" satisfied condition "Succeeded or Failed"
Jun  1 13:30:01.699: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-3c0066b8-49c6-464c-b3e7-1a11ffacea97 container secret-env-test: <nil>
STEP: delete the pod
Jun  1 13:30:01.721: INFO: Waiting for pod pod-secrets-3c0066b8-49c6-464c-b3e7-1a11ffacea97 to disappear
Jun  1 13:30:01.725: INFO: Pod pod-secrets-3c0066b8-49c6-464c-b3e7-1a11ffacea97 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Jun  1 13:30:01.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1902" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":292,"completed":110,"skipped":1899,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Events
... skipping 16 lines ...
Jun  1 13:30:09.825: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  test/e2e/framework/framework.go:175
Jun  1 13:30:09.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3487" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":292,"completed":111,"skipped":1947,"failed":0}

------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 19 lines ...
Jun  1 13:30:29.912: INFO: The status of Pod test-webserver-57c0f0f4-1594-440b-bf18-6f59058a1596 is Running (Ready = true)
Jun  1 13:30:29.916: INFO: Container started at 2020-06-01 13:30:11 +0000 UTC, pod became ready at 2020-06-01 13:30:28 +0000 UTC
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 13:30:29.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3619" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":292,"completed":112,"skipped":1947,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 114 lines ...
Jun  1 13:31:03.210: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-777/pods","resourceVersion":"15198"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Jun  1 13:31:03.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-777" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":292,"completed":113,"skipped":1965,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 13:31:03.296: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04398589-8ce7-42c9-9367-3156e69e7872" in namespace "downward-api-4038" to be "Succeeded or Failed"
Jun  1 13:31:03.299: INFO: Pod "downwardapi-volume-04398589-8ce7-42c9-9367-3156e69e7872": Phase="Pending", Reason="", readiness=false. Elapsed: 2.962047ms
Jun  1 13:31:05.305: INFO: Pod "downwardapi-volume-04398589-8ce7-42c9-9367-3156e69e7872": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009156597s
Jun  1 13:31:07.310: INFO: Pod "downwardapi-volume-04398589-8ce7-42c9-9367-3156e69e7872": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013568394s
STEP: Saw pod success
Jun  1 13:31:07.310: INFO: Pod "downwardapi-volume-04398589-8ce7-42c9-9367-3156e69e7872" satisfied condition "Succeeded or Failed"
Jun  1 13:31:07.313: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-04398589-8ce7-42c9-9367-3156e69e7872 container client-container: <nil>
STEP: delete the pod
Jun  1 13:31:07.348: INFO: Waiting for pod downwardapi-volume-04398589-8ce7-42c9-9367-3156e69e7872 to disappear
Jun  1 13:31:07.351: INFO: Pod downwardapi-volume-04398589-8ce7-42c9-9367-3156e69e7872 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 13:31:07.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4038" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":114,"skipped":1968,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 82 lines ...
Jun  1 13:31:29.252: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8871/pods","resourceVersion":"15387"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Jun  1 13:31:29.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8871" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":292,"completed":115,"skipped":1990,"failed":0}
SS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 13:31:29.277: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap that has name configmap-test-emptyKey-8095b6d4-fb7b-463b-95bc-5463f2be1951
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 13:31:29.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3406" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":292,"completed":116,"skipped":1992,"failed":0}
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 33 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
W0601 13:31:35.400537   11738 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun  1 13:31:35.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1088" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":292,"completed":117,"skipped":1994,"failed":0}
SSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
... skipping 418 lines ...
Jun  1 13:31:47.240: INFO: 99 %ile: 773.636224ms
Jun  1 13:31:47.240: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  test/e2e/framework/framework.go:175
Jun  1 13:31:47.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8107" for this suite.
•{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":292,"completed":118,"skipped":2001,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Jun  1 13:31:54.509: INFO: stderr: ""
Jun  1 13:31:54.509: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7337-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 13:31:58.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7819" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":292,"completed":119,"skipped":2015,"failed":0}
SSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 25 lines ...
Jun  1 13:32:03.466: INFO: Pod "test-cleanup-deployment-6688745694-v2txf" is not available:
&Pod{ObjectMeta:{test-cleanup-deployment-6688745694-v2txf test-cleanup-deployment-6688745694- deployment-2673 /api/v1/namespaces/deployment-2673/pods/test-cleanup-deployment-6688745694-v2txf fab7d547-6592-4b70-9d56-fc807a45844d 17521 0 2020-06-01 13:32:03 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 787fd7d9-08b7-4900-821f-47c148d9df68 0xc006d97287 0xc006d97288}] []  [{kube-controller-manager Update v1 2020-06-01 13:32:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"787fd7d9-08b7-4900-821f-47c148d9df68\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-99tgs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-99tgs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-99tgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute
,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 13:32:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Jun  1 13:32:03.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2673" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":292,"completed":120,"skipped":2018,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-410c884d-dc6d-47a3-a925-f852d0f5c498
STEP: Creating a pod to test consume configMaps
Jun  1 13:32:03.527: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ff739d37-830d-445b-a38e-97ffe249d9e8" in namespace "projected-3729" to be "Succeeded or Failed"
Jun  1 13:32:03.534: INFO: Pod "pod-projected-configmaps-ff739d37-830d-445b-a38e-97ffe249d9e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.403605ms
Jun  1 13:32:05.547: INFO: Pod "pod-projected-configmaps-ff739d37-830d-445b-a38e-97ffe249d9e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019993558s
STEP: Saw pod success
Jun  1 13:32:05.547: INFO: Pod "pod-projected-configmaps-ff739d37-830d-445b-a38e-97ffe249d9e8" satisfied condition "Succeeded or Failed"
Jun  1 13:32:05.556: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-ff739d37-830d-445b-a38e-97ffe249d9e8 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 13:32:05.601: INFO: Waiting for pod pod-projected-configmaps-ff739d37-830d-445b-a38e-97ffe249d9e8 to disappear
Jun  1 13:32:05.605: INFO: Pod pod-projected-configmaps-ff739d37-830d-445b-a38e-97ffe249d9e8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 13:32:05.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3729" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":292,"completed":121,"skipped":2058,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 13:32:05.668: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5bbeb57e-3082-4119-a268-501ed9a46420" in namespace "projected-5383" to be "Succeeded or Failed"
Jun  1 13:32:05.671: INFO: Pod "downwardapi-volume-5bbeb57e-3082-4119-a268-501ed9a46420": Phase="Pending", Reason="", readiness=false. Elapsed: 3.412538ms
Jun  1 13:32:07.681: INFO: Pod "downwardapi-volume-5bbeb57e-3082-4119-a268-501ed9a46420": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013179616s
Jun  1 13:32:09.687: INFO: Pod "downwardapi-volume-5bbeb57e-3082-4119-a268-501ed9a46420": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019529003s
STEP: Saw pod success
Jun  1 13:32:09.687: INFO: Pod "downwardapi-volume-5bbeb57e-3082-4119-a268-501ed9a46420" satisfied condition "Succeeded or Failed"
Jun  1 13:32:09.693: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-5bbeb57e-3082-4119-a268-501ed9a46420 container client-container: <nil>
STEP: delete the pod
Jun  1 13:32:09.730: INFO: Waiting for pod downwardapi-volume-5bbeb57e-3082-4119-a268-501ed9a46420 to disappear
Jun  1 13:32:09.737: INFO: Pod downwardapi-volume-5bbeb57e-3082-4119-a268-501ed9a46420 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 13:32:09.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5383" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":292,"completed":122,"skipped":2069,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 65 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 13:32:33.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5046" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":292,"completed":123,"skipped":2079,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 13:32:33.251: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e988437-3371-46de-964f-114cc761533b" in namespace "projected-954" to be "Succeeded or Failed"
Jun  1 13:32:33.257: INFO: Pod "downwardapi-volume-1e988437-3371-46de-964f-114cc761533b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.039158ms
Jun  1 13:32:35.265: INFO: Pod "downwardapi-volume-1e988437-3371-46de-964f-114cc761533b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013513436s
Jun  1 13:32:37.272: INFO: Pod "downwardapi-volume-1e988437-3371-46de-964f-114cc761533b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020614268s
STEP: Saw pod success
Jun  1 13:32:37.273: INFO: Pod "downwardapi-volume-1e988437-3371-46de-964f-114cc761533b" satisfied condition "Succeeded or Failed"
Jun  1 13:32:37.277: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-1e988437-3371-46de-964f-114cc761533b container client-container: <nil>
STEP: delete the pod
Jun  1 13:32:37.297: INFO: Waiting for pod downwardapi-volume-1e988437-3371-46de-964f-114cc761533b to disappear
Jun  1 13:32:37.301: INFO: Pod downwardapi-volume-1e988437-3371-46de-964f-114cc761533b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 13:32:37.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-954" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":292,"completed":124,"skipped":2111,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-6fc7db75-5a79-426d-b4f7-699ba9d8b787
STEP: Creating a pod to test consume secrets
Jun  1 13:32:37.416: INFO: Waiting up to 5m0s for pod "pod-secrets-2a115729-efee-4bc8-8f91-4283aae370e5" in namespace "secrets-3176" to be "Succeeded or Failed"
Jun  1 13:32:37.421: INFO: Pod "pod-secrets-2a115729-efee-4bc8-8f91-4283aae370e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.650749ms
Jun  1 13:32:39.425: INFO: Pod "pod-secrets-2a115729-efee-4bc8-8f91-4283aae370e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009035532s
Jun  1 13:32:41.435: INFO: Pod "pod-secrets-2a115729-efee-4bc8-8f91-4283aae370e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019272197s
STEP: Saw pod success
Jun  1 13:32:41.435: INFO: Pod "pod-secrets-2a115729-efee-4bc8-8f91-4283aae370e5" satisfied condition "Succeeded or Failed"
Jun  1 13:32:41.440: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-2a115729-efee-4bc8-8f91-4283aae370e5 container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 13:32:41.461: INFO: Waiting for pod pod-secrets-2a115729-efee-4bc8-8f91-4283aae370e5 to disappear
Jun  1 13:32:41.465: INFO: Pod pod-secrets-2a115729-efee-4bc8-8f91-4283aae370e5 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 13:32:41.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3176" for this suite.
STEP: Destroying namespace "secret-namespace-6089" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":292,"completed":125,"skipped":2126,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 13:32:53.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5393" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":292,"completed":126,"skipped":2147,"failed":0}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-t98t
STEP: Creating a pod to test atomic-volume-subpath
Jun  1 13:32:54.049: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-t98t" in namespace "subpath-835" to be "Succeeded or Failed"
Jun  1 13:32:54.054: INFO: Pod "pod-subpath-test-configmap-t98t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.738439ms
Jun  1 13:32:56.059: INFO: Pod "pod-subpath-test-configmap-t98t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010069233s
Jun  1 13:32:58.065: INFO: Pod "pod-subpath-test-configmap-t98t": Phase="Running", Reason="", readiness=true. Elapsed: 4.015602918s
Jun  1 13:33:00.072: INFO: Pod "pod-subpath-test-configmap-t98t": Phase="Running", Reason="", readiness=true. Elapsed: 6.023143304s
Jun  1 13:33:02.080: INFO: Pod "pod-subpath-test-configmap-t98t": Phase="Running", Reason="", readiness=true. Elapsed: 8.031284853s
Jun  1 13:33:04.089: INFO: Pod "pod-subpath-test-configmap-t98t": Phase="Running", Reason="", readiness=true. Elapsed: 10.040113766s
... skipping 2 lines ...
Jun  1 13:33:10.105: INFO: Pod "pod-subpath-test-configmap-t98t": Phase="Running", Reason="", readiness=true. Elapsed: 16.056138931s
Jun  1 13:33:12.112: INFO: Pod "pod-subpath-test-configmap-t98t": Phase="Running", Reason="", readiness=true. Elapsed: 18.062827677s
Jun  1 13:33:14.117: INFO: Pod "pod-subpath-test-configmap-t98t": Phase="Running", Reason="", readiness=true. Elapsed: 20.068020373s
Jun  1 13:33:16.122: INFO: Pod "pod-subpath-test-configmap-t98t": Phase="Running", Reason="", readiness=true. Elapsed: 22.072963381s
Jun  1 13:33:18.127: INFO: Pod "pod-subpath-test-configmap-t98t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.078488995s
STEP: Saw pod success
Jun  1 13:33:18.127: INFO: Pod "pod-subpath-test-configmap-t98t" satisfied condition "Succeeded or Failed"
Jun  1 13:33:18.131: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-configmap-t98t container test-container-subpath-configmap-t98t: <nil>
STEP: delete the pod
Jun  1 13:33:18.149: INFO: Waiting for pod pod-subpath-test-configmap-t98t to disappear
Jun  1 13:33:18.152: INFO: Pod pod-subpath-test-configmap-t98t no longer exists
STEP: Deleting pod pod-subpath-test-configmap-t98t
Jun  1 13:33:18.152: INFO: Deleting pod "pod-subpath-test-configmap-t98t" in namespace "subpath-835"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Jun  1 13:33:18.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-835" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":292,"completed":127,"skipped":2149,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Jun  1 13:33:22.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2104" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":128,"skipped":2164,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 13:33:22.239: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jun  1 13:33:22.307: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  1 13:33:22.317: INFO: Number of nodes with available pods: 0
Jun  1 13:33:22.317: INFO: Node kind-worker is running more than one daemon pod
... skipping 3 lines ...
Jun  1 13:33:24.324: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  1 13:33:24.329: INFO: Number of nodes with available pods: 0
Jun  1 13:33:24.329: INFO: Node kind-worker is running more than one daemon pod
Jun  1 13:33:25.328: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  1 13:33:25.334: INFO: Number of nodes with available pods: 2
Jun  1 13:33:25.334: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jun  1 13:33:25.358: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  1 13:33:25.364: INFO: Number of nodes with available pods: 1
Jun  1 13:33:25.364: INFO: Node kind-worker is running more than one daemon pod
Jun  1 13:33:26.375: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  1 13:33:26.386: INFO: Number of nodes with available pods: 1
Jun  1 13:33:26.386: INFO: Node kind-worker is running more than one daemon pod
Jun  1 13:33:27.368: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  1 13:33:27.372: INFO: Number of nodes with available pods: 1
Jun  1 13:33:27.372: INFO: Node kind-worker is running more than one daemon pod
Jun  1 13:33:28.369: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  1 13:33:28.373: INFO: Number of nodes with available pods: 2
Jun  1 13:33:28.373: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4780, will wait for the garbage collector to delete the pods
Jun  1 13:33:28.440: INFO: Deleting DaemonSet.extensions daemon-set took: 8.224877ms
Jun  1 13:33:28.742: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.522177ms
... skipping 4 lines ...
Jun  1 13:33:33.155: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4780/pods","resourceVersion":"18260"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Jun  1 13:33:33.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4780" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":292,"completed":129,"skipped":2204,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  test/e2e/framework/framework.go:175
Jun  1 13:33:38.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1288" for this suite.
STEP: Destroying namespace "webhook-1288-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":292,"completed":130,"skipped":2205,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 10 lines ...
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Jun  1 13:33:44.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3862" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":292,"completed":131,"skipped":2224,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 71 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 13:34:03.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8697" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":292,"completed":132,"skipped":2227,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 13:34:19.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8701" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":292,"completed":133,"skipped":2228,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 13:34:19.396: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ef80c29b-90e5-4797-873c-a1549834b275" in namespace "projected-9846" to be "Succeeded or Failed"
Jun  1 13:34:19.398: INFO: Pod "downwardapi-volume-ef80c29b-90e5-4797-873c-a1549834b275": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057128ms
Jun  1 13:34:21.402: INFO: Pod "downwardapi-volume-ef80c29b-90e5-4797-873c-a1549834b275": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006080443s
Jun  1 13:34:23.407: INFO: Pod "downwardapi-volume-ef80c29b-90e5-4797-873c-a1549834b275": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011425334s
STEP: Saw pod success
Jun  1 13:34:23.407: INFO: Pod "downwardapi-volume-ef80c29b-90e5-4797-873c-a1549834b275" satisfied condition "Succeeded or Failed"
Jun  1 13:34:23.411: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-ef80c29b-90e5-4797-873c-a1549834b275 container client-container: <nil>
STEP: delete the pod
Jun  1 13:34:23.429: INFO: Waiting for pod downwardapi-volume-ef80c29b-90e5-4797-873c-a1549834b275 to disappear
Jun  1 13:34:23.432: INFO: Pod downwardapi-volume-ef80c29b-90e5-4797-873c-a1549834b275 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 13:34:23.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9846" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":292,"completed":134,"skipped":2229,"failed":0}
SSSSSSSSSSSSSSSSSSS
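
Not from the captured log: a minimal Go sketch, assuming client-go v0.18.x, of the projected downward API volume such a test mounts; the container then prints the file and exits, which is why the pod above lands in phase Succeeded. The volume and path names are illustrative.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // A projected volume that writes the container's own memory request
        // into /etc/podinfo/memory_request via a resourceFieldRef.
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "memory_request",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "requests.memory",
                                },
                            }},
                        },
                    }},
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }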
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 19 lines ...
Jun  1 13:34:25.278: INFO: stderr: ""
Jun  1 13:34:25.279: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 13:34:25.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2597" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":292,"completed":135,"skipped":2248,"failed":0}
SSSSSSS
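
Not from the captured log: a Go sketch of the kubectl calls this spec wraps, shelling out the way the e2e client does. With --restart=Never, kubectl run creates a bare pod (no controller); the pod name and image mirror the e2e-test-httpd-pod seen in the stdout line above.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Create a standalone pod, then delete it; the delete output matches
        // the log's `pod "e2e-test-httpd-pod" deleted`.
        run := exec.Command("kubectl", "run", "e2e-test-httpd-pod",
            "--restart=Never", "--image=httpd")
        if out, err := run.CombinedOutput(); err != nil {
            fmt.Println(string(out), err)
            return
        }
        del := exec.Command("kubectl", "delete", "pod", "e2e-test-httpd-pod")
        out, _ := del.CombinedOutput()
        fmt.Print(string(out))
    }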
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 13:34:25.340: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d5da133-0f69-4694-9873-e2e6c410381f" in namespace "downward-api-8387" to be "Succeeded or Failed"
Jun  1 13:34:25.344: INFO: Pod "downwardapi-volume-3d5da133-0f69-4694-9873-e2e6c410381f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083066ms
Jun  1 13:34:27.353: INFO: Pod "downwardapi-volume-3d5da133-0f69-4694-9873-e2e6c410381f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013196957s
Jun  1 13:34:29.359: INFO: Pod "downwardapi-volume-3d5da133-0f69-4694-9873-e2e6c410381f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018927783s
STEP: Saw pod success
Jun  1 13:34:29.359: INFO: Pod "downwardapi-volume-3d5da133-0f69-4694-9873-e2e6c410381f" satisfied condition "Succeeded or Failed"
Jun  1 13:34:29.363: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-3d5da133-0f69-4694-9873-e2e6c410381f container client-container: <nil>
STEP: delete the pod
Jun  1 13:34:29.381: INFO: Waiting for pod downwardapi-volume-3d5da133-0f69-4694-9873-e2e6c410381f to disappear
Jun  1 13:34:29.386: INFO: Pod downwardapi-volume-3d5da133-0f69-4694-9873-e2e6c410381f no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 13:34:29.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8387" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":136,"skipped":2255,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 9 lines ...
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/framework.go:175
Jun  1 13:34:29.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5688" for this suite.
STEP: Destroying namespace "nspatchtest-b6816f3f-2fe7-48f8-947a-ee6e3280f309-181" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":292,"completed":137,"skipped":2260,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Jun  1 13:34:29.541: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override arguments
Jun  1 13:34:29.581: INFO: Waiting up to 5m0s for pod "client-containers-bfc411e6-009d-48ac-b204-918bdf417cfe" in namespace "containers-4611" to be "Succeeded or Failed"
Jun  1 13:34:29.586: INFO: Pod "client-containers-bfc411e6-009d-48ac-b204-918bdf417cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.463544ms
Jun  1 13:34:31.592: INFO: Pod "client-containers-bfc411e6-009d-48ac-b204-918bdf417cfe": Phase="Running", Reason="", readiness=true. Elapsed: 2.010908516s
Jun  1 13:34:33.600: INFO: Pod "client-containers-bfc411e6-009d-48ac-b204-918bdf417cfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018960879s
STEP: Saw pod success
Jun  1 13:34:33.600: INFO: Pod "client-containers-bfc411e6-009d-48ac-b204-918bdf417cfe" satisfied condition "Succeeded or Failed"
Jun  1 13:34:33.605: INFO: Trying to get logs from node kind-worker2 pod client-containers-bfc411e6-009d-48ac-b204-918bdf417cfe container test-container: <nil>
STEP: delete the pod
Jun  1 13:34:33.627: INFO: Waiting for pod client-containers-bfc411e6-009d-48ac-b204-918bdf417cfe to disappear
Jun  1 13:34:33.631: INFO: Pod client-containers-bfc411e6-009d-48ac-b204-918bdf417cfe no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Jun  1 13:34:33.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4611" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":292,"completed":138,"skipped":2275,"failed":0}
SSSSSSSSSSSSSS
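
Not from the captured log: a minimal Go sketch of the container override this spec exercises, assuming client-go v0.18.x; the image and argument strings are illustrative. Setting Args replaces the image's CMD while leaving its ENTRYPOINT intact; setting Command (the next spec in this log) replaces the ENTRYPOINT.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Args overrides the image's default CMD (docker cmd).
        c := corev1.Container{
            Name:  "test-container",
            Image: "busybox", // illustrative image
            Args:  []string{"echo", "override", "arguments"},
        }
        fmt.Printf("%+v\n", c)
    }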
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Jun  1 13:34:37.692: INFO: Initial restart count of pod liveness-7f4c7401-b5b1-4eca-8363-4d7b5999d3ac is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 13:38:38.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4497" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":292,"completed":139,"skipped":2289,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
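
Not from the captured log: a minimal Go sketch, assuming client-go v0.18.x, of the TCP liveness probe this spec attaches. As long as something accepts connections on the probed port, the restart count stays at 0, which is what the four-minute wait above verifies. The timing values are illustrative.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        // A liveness probe that dials the container's own port 8080.
        probe := &corev1.Probe{
            Handler: corev1.Handler{ // renamed ProbeHandler in newer API versions
                TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
            },
            InitialDelaySeconds: 15,
            PeriodSeconds:       10, // illustrative timing
        }
        fmt.Printf("%+v\n", probe)
    }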
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Jun  1 13:38:38.538: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override command
Jun  1 13:38:38.588: INFO: Waiting up to 5m0s for pod "client-containers-757c27ac-dc06-4bf7-874d-98a3cae919ae" in namespace "containers-9508" to be "Succeeded or Failed"
Jun  1 13:38:38.592: INFO: Pod "client-containers-757c27ac-dc06-4bf7-874d-98a3cae919ae": Phase="Pending", Reason="", readiness=false. Elapsed: 3.729232ms
Jun  1 13:38:40.598: INFO: Pod "client-containers-757c27ac-dc06-4bf7-874d-98a3cae919ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009384559s
Jun  1 13:38:42.605: INFO: Pod "client-containers-757c27ac-dc06-4bf7-874d-98a3cae919ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017178924s
STEP: Saw pod success
Jun  1 13:38:42.605: INFO: Pod "client-containers-757c27ac-dc06-4bf7-874d-98a3cae919ae" satisfied condition "Succeeded or Failed"
Jun  1 13:38:42.610: INFO: Trying to get logs from node kind-worker2 pod client-containers-757c27ac-dc06-4bf7-874d-98a3cae919ae container test-container: <nil>
STEP: delete the pod
Jun  1 13:38:42.646: INFO: Waiting for pod client-containers-757c27ac-dc06-4bf7-874d-98a3cae919ae to disappear
Jun  1 13:38:42.656: INFO: Pod client-containers-757c27ac-dc06-4bf7-874d-98a3cae919ae no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Jun  1 13:38:42.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9508" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":292,"completed":140,"skipped":2310,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
... skipping 12 lines ...
Jun  1 13:38:47.106: INFO: Terminating Job.batch foo pods took: 301.430559ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Jun  1 13:39:23.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7997" for this suite.
•{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":292,"completed":141,"skipped":2332,"failed":0}
SSSSSS
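
Not from the captured log: a Go sketch, assuming client-go v0.18.x and a kubeconfig at the default path, of the cascading delete behind the "Terminating Job.batch foo pods" line above; the namespace and job name are taken from the log.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Delete the job and let the garbage collector take its pods down
        // in the background.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        policy := metav1.DeletePropagationBackground
        err = cs.BatchV1().Jobs("job-7997").Delete(context.TODO(), "foo",
            metav1.DeleteOptions{PropagationPolicy: &policy})
        fmt.Println(err)
    }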
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
Jun  1 13:39:28.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5132" for this suite.
STEP: Destroying namespace "webhook-5132-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":292,"completed":142,"skipped":2338,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-1e32967e-3c0e-4c2b-84c7-5091e40b11eb
STEP: Creating a pod to test consume secrets
Jun  1 13:39:28.921: INFO: Waiting up to 5m0s for pod "pod-secrets-c9289393-aa4f-4771-b19e-33cdf9dbedab" in namespace "secrets-4999" to be "Succeeded or Failed"
Jun  1 13:39:28.925: INFO: Pod "pod-secrets-c9289393-aa4f-4771-b19e-33cdf9dbedab": Phase="Pending", Reason="", readiness=false. Elapsed: 3.352902ms
Jun  1 13:39:30.929: INFO: Pod "pod-secrets-c9289393-aa4f-4771-b19e-33cdf9dbedab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007779386s
Jun  1 13:39:32.935: INFO: Pod "pod-secrets-c9289393-aa4f-4771-b19e-33cdf9dbedab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01388118s
STEP: Saw pod success
Jun  1 13:39:32.935: INFO: Pod "pod-secrets-c9289393-aa4f-4771-b19e-33cdf9dbedab" satisfied condition "Succeeded or Failed"
Jun  1 13:39:32.941: INFO: Trying to get logs from node kind-worker pod pod-secrets-c9289393-aa4f-4771-b19e-33cdf9dbedab container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 13:39:32.974: INFO: Waiting for pod pod-secrets-c9289393-aa4f-4771-b19e-33cdf9dbedab to disappear
Jun  1 13:39:32.977: INFO: Pod pod-secrets-c9289393-aa4f-4771-b19e-33cdf9dbedab no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 13:39:32.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4999" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":143,"skipped":2372,"failed":0}

------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 28 lines ...
Jun  1 13:39:37.108: INFO: Selector matched 1 pod for map[app:agnhost]
Jun  1 13:39:37.108: INFO: ForEach: Found 1 pod from the filter. Now looping through it.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 13:39:37.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2532" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":292,"completed":144,"skipped":2372,"failed":0}
SSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Jun  1 13:39:41.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9880" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":292,"completed":145,"skipped":2375,"failed":0}

------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 8 lines ...
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:175
Jun  1 13:39:43.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9960" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":292,"completed":146,"skipped":2375,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating secret secrets-1461/secret-test-6af75d09-2518-4307-a4cb-63f1da415fa5
STEP: Creating a pod to test consume secrets
Jun  1 13:39:43.532: INFO: Waiting up to 5m0s for pod "pod-configmaps-74e42f19-113b-4f8c-99e3-f67f576d5886" in namespace "secrets-1461" to be "Succeeded or Failed"
Jun  1 13:39:43.536: INFO: Pod "pod-configmaps-74e42f19-113b-4f8c-99e3-f67f576d5886": Phase="Pending", Reason="", readiness=false. Elapsed: 3.640672ms
Jun  1 13:39:45.543: INFO: Pod "pod-configmaps-74e42f19-113b-4f8c-99e3-f67f576d5886": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011159445s
STEP: Saw pod success
Jun  1 13:39:45.543: INFO: Pod "pod-configmaps-74e42f19-113b-4f8c-99e3-f67f576d5886" satisfied condition "Succeeded or Failed"
Jun  1 13:39:45.548: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-74e42f19-113b-4f8c-99e3-f67f576d5886 container env-test: <nil>
STEP: delete the pod
Jun  1 13:39:45.575: INFO: Waiting for pod pod-configmaps-74e42f19-113b-4f8c-99e3-f67f576d5886 to disappear
Jun  1 13:39:45.580: INFO: Pod pod-configmaps-74e42f19-113b-4f8c-99e3-f67f576d5886 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Jun  1 13:39:45.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1461" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":292,"completed":147,"skipped":2404,"failed":0}
SSSSSSSSSSSSSSSSS
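
Not from the captured log: a minimal Go sketch, assuming client-go v0.18.x, of surfacing one secret key as an environment variable the way this spec does; the env-test container prints its environment and the test checks the output. The variable, secret, and key names are illustrative.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // One env var sourced from one key of a secret.
        env := corev1.EnvVar{
            Name: "SECRET_DATA",
            ValueFrom: &corev1.EnvVarSource{
                SecretKeyRef: &corev1.SecretKeySelector{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
                    Key:                  "data-1",
                },
            },
        }
        fmt.Printf("%+v\n", env)
    }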
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Jun  1 13:39:49.714: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:39:49.720: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:39:49.748: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:39:49.756: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:39:49.763: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:39:49.769: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:39:49.781: INFO: Lookups using dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5209.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5209.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local jessie_udp@dns-test-service-2.dns-5209.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5209.svc.cluster.local]

Jun  1 13:39:54.788: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:39:54.795: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:39:54.801: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:39:54.809: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:39:54.824: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:39:54.829: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:39:54.834: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:39:54.839: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:39:54.848: INFO: Lookups using dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5209.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5209.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local jessie_udp@dns-test-service-2.dns-5209.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5209.svc.cluster.local]

Jun  1 13:39:59.787: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:39:59.798: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:39:59.804: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:39:59.807: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:39:59.818: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:39:59.823: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:39:59.827: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:39:59.830: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:39:59.836: INFO: Lookups using dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5209.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5209.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local jessie_udp@dns-test-service-2.dns-5209.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5209.svc.cluster.local]

Jun  1 13:40:04.786: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:04.791: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:04.796: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:04.800: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:04.820: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:04.824: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:04.828: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:04.832: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:04.839: INFO: Lookups using dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5209.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5209.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local jessie_udp@dns-test-service-2.dns-5209.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5209.svc.cluster.local]

Jun  1 13:40:09.790: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:09.797: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:09.802: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:09.807: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:09.817: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:09.821: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:09.824: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:09.828: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:09.834: INFO: Lookups using dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5209.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5209.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local jessie_udp@dns-test-service-2.dns-5209.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5209.svc.cluster.local]

Jun  1 13:40:14.786: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:14.790: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:14.794: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:14.798: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:14.810: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:14.813: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:14.816: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:14.820: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5209.svc.cluster.local from pod dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175: the server could not find the requested resource (get pods dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175)
Jun  1 13:40:14.828: INFO: Lookups using dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5209.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5209.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local jessie_udp@dns-test-service-2.dns-5209.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5209.svc.cluster.local]

Jun  1 13:40:19.837: INFO: DNS probes using dns-5209/dns-test-5f9cac3f-9f36-43f5-8aaf-264a32685175 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 13:40:19.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5209" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":292,"completed":148,"skipped":2421,"failed":0}
SSS
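
Not from the captured log: a minimal Go sketch, assuming client-go v0.18.x, of the hostname/subdomain pairing behind the records probed above. A pod's hostname plus subdomain, together with a headless Service named after the subdomain, is what makes names like dns-querier-2.dns-test-service-2.dns-5209.svc.cluster.local resolvable; the repeated "Unable to read" retries are the probes polling until DNS catches up. The selector below is illustrative.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "dns-test"},
            Spec: corev1.PodSpec{
                Hostname:  "dns-querier-2",
                Subdomain: "dns-test-service-2",
            },
        }
        svc := corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-2"},
            Spec: corev1.ServiceSpec{
                ClusterIP: corev1.ClusterIPNone, // headless: gives pods DNS records
                Selector:  map[string]string{"dns-test": "true"}, // illustrative
            },
        }
        fmt.Printf("%+v\n%+v\n", pod.Spec, svc.Spec)
    }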
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 12 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1127
STEP: Creating statefulset with conflicting port in namespace statefulset-1127
STEP: Waiting until pod test-pod starts running in namespace statefulset-1127
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-1127
Jun  1 13:40:23.997: INFO: Observed stateful pod in namespace: statefulset-1127, name: ss-0, uid: a5735558-52dc-4679-bfb7-ec38d9a45e8a, status phase: Pending. Waiting for statefulset controller to delete.
Jun  1 13:40:24.786: INFO: Observed stateful pod in namespace: statefulset-1127, name: ss-0, uid: a5735558-52dc-4679-bfb7-ec38d9a45e8a, status phase: Failed. Waiting for statefulset controller to delete.
Jun  1 13:40:24.796: INFO: Observed stateful pod in namespace: statefulset-1127, name: ss-0, uid: a5735558-52dc-4679-bfb7-ec38d9a45e8a, status phase: Failed. Waiting for statefulset controller to delete.
Jun  1 13:40:24.799: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1127
STEP: Removing pod with conflicting port in namespace statefulset-1127
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-1127 and enters the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:114
Jun  1 13:40:28.831: INFO: Deleting all statefulset in ns statefulset-1127
Jun  1 13:40:28.836: INFO: Scaling statefulset ss to 0
Jun  1 13:40:48.868: INFO: Waiting for statefulset status.replicas updated to 0
Jun  1 13:40:48.872: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Jun  1 13:40:48.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1127" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":292,"completed":149,"skipped":2424,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 13:40:48.952: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f404a92-51b3-46f0-ba2f-2cbb71b489c7" in namespace "projected-1814" to be "Succeeded or Failed"
Jun  1 13:40:48.960: INFO: Pod "downwardapi-volume-9f404a92-51b3-46f0-ba2f-2cbb71b489c7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.313807ms
Jun  1 13:40:50.965: INFO: Pod "downwardapi-volume-9f404a92-51b3-46f0-ba2f-2cbb71b489c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0137119s
Jun  1 13:40:52.972: INFO: Pod "downwardapi-volume-9f404a92-51b3-46f0-ba2f-2cbb71b489c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020568244s
STEP: Saw pod success
Jun  1 13:40:52.972: INFO: Pod "downwardapi-volume-9f404a92-51b3-46f0-ba2f-2cbb71b489c7" satisfied condition "Succeeded or Failed"
Jun  1 13:40:52.975: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-9f404a92-51b3-46f0-ba2f-2cbb71b489c7 container client-container: <nil>
STEP: delete the pod
Jun  1 13:40:52.992: INFO: Waiting for pod downwardapi-volume-9f404a92-51b3-46f0-ba2f-2cbb71b489c7 to disappear
Jun  1 13:40:52.996: INFO: Pod downwardapi-volume-9f404a92-51b3-46f0-ba2f-2cbb71b489c7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 13:40:52.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1814" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":150,"skipped":2458,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 13:40:53.004: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun  1 13:40:53.039: INFO: Waiting up to 5m0s for pod "pod-c9501623-8cf4-4e8c-91c5-2970a7768666" in namespace "emptydir-6476" to be "Succeeded or Failed"
Jun  1 13:40:53.042: INFO: Pod "pod-c9501623-8cf4-4e8c-91c5-2970a7768666": Phase="Pending", Reason="", readiness=false. Elapsed: 2.653123ms
Jun  1 13:40:55.047: INFO: Pod "pod-c9501623-8cf4-4e8c-91c5-2970a7768666": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007867847s
Jun  1 13:40:57.052: INFO: Pod "pod-c9501623-8cf4-4e8c-91c5-2970a7768666": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013194924s
STEP: Saw pod success
Jun  1 13:40:57.052: INFO: Pod "pod-c9501623-8cf4-4e8c-91c5-2970a7768666" satisfied condition "Succeeded or Failed"
Jun  1 13:40:57.056: INFO: Trying to get logs from node kind-worker2 pod pod-c9501623-8cf4-4e8c-91c5-2970a7768666 container test-container: <nil>
STEP: delete the pod
Jun  1 13:40:57.077: INFO: Waiting for pod pod-c9501623-8cf4-4e8c-91c5-2970a7768666 to disappear
Jun  1 13:40:57.083: INFO: Pod pod-c9501623-8cf4-4e8c-91c5-2970a7768666 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 13:40:57.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6476" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":151,"skipped":2463,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 44 lines ...
Jun  1 13:41:37.324: INFO: Deleting pod "simpletest.rc-xgxt7" in namespace "gc-3381"
Jun  1 13:41:37.345: INFO: Deleting pod "simpletest.rc-zlr8x" in namespace "gc-3381"
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Jun  1 13:41:37.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3381" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":292,"completed":152,"skipped":2467,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
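
Not from the captured log: a Go sketch, assuming client-go v0.18.x and a kubeconfig at the default path, of the orphaning delete this spec issues. The Orphan policy strips owner references instead of cascading, so the simpletest.rc-* pods survive the controller's deletion and the test cleans them up one by one, as the log above shows; namespace and RC name are taken from the log.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        orphan := metav1.DeletePropagationOrphan
        err = cs.CoreV1().ReplicationControllers("gc-3381").Delete(context.TODO(),
            "simpletest.rc", metav1.DeleteOptions{PropagationPolicy: &orphan})
        fmt.Println(err)
    }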
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 12 lines ...
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 13:41:37.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1930" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":292,"completed":153,"skipped":2499,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Jun  1 13:41:37.516: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test env composition
Jun  1 13:41:37.576: INFO: Waiting up to 5m0s for pod "var-expansion-69fb79ef-39d6-424d-be3e-6a5e337a930a" in namespace "var-expansion-4693" to be "Succeeded or Failed"
Jun  1 13:41:37.584: INFO: Pod "var-expansion-69fb79ef-39d6-424d-be3e-6a5e337a930a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.766581ms
Jun  1 13:41:39.589: INFO: Pod "var-expansion-69fb79ef-39d6-424d-be3e-6a5e337a930a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012975944s
Jun  1 13:41:41.594: INFO: Pod "var-expansion-69fb79ef-39d6-424d-be3e-6a5e337a930a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017828259s
STEP: Saw pod success
Jun  1 13:41:41.594: INFO: Pod "var-expansion-69fb79ef-39d6-424d-be3e-6a5e337a930a" satisfied condition "Succeeded or Failed"
Jun  1 13:41:41.599: INFO: Trying to get logs from node kind-worker2 pod var-expansion-69fb79ef-39d6-424d-be3e-6a5e337a930a container dapi-container: <nil>
STEP: delete the pod
Jun  1 13:41:41.619: INFO: Waiting for pod var-expansion-69fb79ef-39d6-424d-be3e-6a5e337a930a to disappear
Jun  1 13:41:41.622: INFO: Pod var-expansion-69fb79ef-39d6-424d-be3e-6a5e337a930a no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 13:41:41.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4693" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":292,"completed":154,"skipped":2510,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
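
Not from the captured log: a minimal Go sketch, assuming client-go v0.18.x, of the env composition this spec checks; variable names and values are illustrative. The kubelet expands $(VAR) references in later env vars from earlier ones, and the dapi-container echoes the composed value.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        envs := []corev1.EnvVar{
            {Name: "FOO", Value: "foo-value"},
            {Name: "BAR", Value: "bar-value"},
            {Name: "FOOBAR", Value: "$(FOO);;$(BAR)"}, // expands to "foo-value;;bar-value"
        }
        fmt.Printf("%+v\n", envs)
    }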
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-d2aa13da-59ba-472d-823a-bfe05febda43
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 13:41:47.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9138" for this suite.
•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":155,"skipped":2537,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Jun  1 13:41:50.888: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Jun  1 13:41:50.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-143" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":292,"completed":156,"skipped":2554,"failed":0}
SS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 9 lines ...
STEP: creating pod
Jun  1 13:41:54.988: INFO: Pod pod-hostip-a81a4aff-c522-4cfa-8f0c-1601a2e24476 has hostIP: 172.18.0.3
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Jun  1 13:41:54.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1156" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":292,"completed":157,"skipped":2556,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 8 lines ...
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 13:42:02.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-527" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":292,"completed":158,"skipped":2568,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 16 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 13:42:15.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7078" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":292,"completed":159,"skipped":2575,"failed":0}
SSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected combined
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-projected-all-test-volume-2fbef240-e84d-489c-995d-a2df640725b1
STEP: Creating secret with name secret-projected-all-test-volume-787cad84-ee22-49d9-b062-0d6da4d96e96
STEP: Creating a pod to test Check all projections for projected volume plugin
Jun  1 13:42:15.264: INFO: Waiting up to 5m0s for pod "projected-volume-5916baa9-f68f-4fe0-8665-e3389a6f642e" in namespace "projected-8592" to be "Succeeded or Failed"
Jun  1 13:42:15.267: INFO: Pod "projected-volume-5916baa9-f68f-4fe0-8665-e3389a6f642e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.458664ms
Jun  1 13:42:17.272: INFO: Pod "projected-volume-5916baa9-f68f-4fe0-8665-e3389a6f642e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008512265s
Jun  1 13:42:19.277: INFO: Pod "projected-volume-5916baa9-f68f-4fe0-8665-e3389a6f642e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013176183s
STEP: Saw pod success
Jun  1 13:42:19.277: INFO: Pod "projected-volume-5916baa9-f68f-4fe0-8665-e3389a6f642e" satisfied condition "Succeeded or Failed"
Jun  1 13:42:19.281: INFO: Trying to get logs from node kind-worker pod projected-volume-5916baa9-f68f-4fe0-8665-e3389a6f642e container projected-all-volume-test: <nil>
STEP: delete the pod
Jun  1 13:42:19.308: INFO: Waiting for pod projected-volume-5916baa9-f68f-4fe0-8665-e3389a6f642e to disappear
Jun  1 13:42:19.313: INFO: Pod projected-volume-5916baa9-f68f-4fe0-8665-e3389a6f642e no longer exists
[AfterEach] [sig-storage] Projected combined
  test/e2e/framework/framework.go:175
Jun  1 13:42:19.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8592" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":292,"completed":160,"skipped":2579,"failed":0}
SSSSSSS
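
Not from the captured log: a minimal Go sketch, assuming client-go v0.18.x, of a projected volume that merges configMap, secret, and downward API sources under one mount point, which is the combination this spec checks; the object and path names are illustrative.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "projected-all",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all"},
                        }},
                        {Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all"},
                        }},
                        {DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "podname",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                            }},
                        }},
                    },
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }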
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 26 lines ...
Jun  1 13:43:09.444: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4588 /api/v1/namespaces/watch-4588/configmaps/e2e-watch-test-configmap-b c4d971fe-bd28-4eea-8c46-d926212fd1d2 21329 0 2020-06-01 13:42:59 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-06-01 13:42:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jun  1 13:43:09.444: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-4588 /api/v1/namespaces/watch-4588/configmaps/e2e-watch-test-configmap-b c4d971fe-bd28-4eea-8c46-d926212fd1d2 21329 0 2020-06-01 13:42:59 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-06-01 13:42:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Jun  1 13:43:19.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4588" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":292,"completed":161,"skipped":2586,"failed":0}
SSSSSSSSSSSSSS
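
Not from the captured log: a Go sketch, assuming client-go v0.18.x and a kubeconfig at the default path, of the label-filtered watch behind the ADDED/MODIFIED/DELETED events printed above; the namespace and label selector are taken from the log.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Open the watch and drain its event channel until the server or
        // caller closes it.
        w, err := cs.CoreV1().ConfigMaps("watch-4588").Watch(context.TODO(),
            metav1.ListOptions{LabelSelector: "watch-this-configmap=multiple-watchers-B"})
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Println(ev.Type, ev.Object.GetObjectKind())
        }
    }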
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Jun  1 13:43:21.517: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Jun  1 13:43:21.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5606" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":292,"completed":162,"skipped":2600,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 36 lines ...
Jun  1 13:43:28.683: INFO: stdout: "service/rm3 exposed\n"
Jun  1 13:43:28.695: INFO: Service rm3 in namespace kubectl-826 found.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 13:43:30.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-826" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":292,"completed":163,"skipped":2619,"failed":0}
S
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 10 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 13:43:30.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4829" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":292,"completed":164,"skipped":2620,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-772aa212-a122-4b81-b1b4-58afea66d461
STEP: Creating a pod to test consume secrets
Jun  1 13:43:30.796: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e3c97d72-b32e-482b-ae89-95dc3e97fef4" in namespace "projected-1139" to be "Succeeded or Failed"
Jun  1 13:43:30.799: INFO: Pod "pod-projected-secrets-e3c97d72-b32e-482b-ae89-95dc3e97fef4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.996754ms
Jun  1 13:43:32.804: INFO: Pod "pod-projected-secrets-e3c97d72-b32e-482b-ae89-95dc3e97fef4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007880189s
Jun  1 13:43:34.810: INFO: Pod "pod-projected-secrets-e3c97d72-b32e-482b-ae89-95dc3e97fef4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013526651s
STEP: Saw pod success
Jun  1 13:43:34.810: INFO: Pod "pod-projected-secrets-e3c97d72-b32e-482b-ae89-95dc3e97fef4" satisfied condition "Succeeded or Failed"
Jun  1 13:43:34.814: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-e3c97d72-b32e-482b-ae89-95dc3e97fef4 container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 13:43:34.849: INFO: Waiting for pod pod-projected-secrets-e3c97d72-b32e-482b-ae89-95dc3e97fef4 to disappear
Jun  1 13:43:34.852: INFO: Pod pod-projected-secrets-e3c97d72-b32e-482b-ae89-95dc3e97fef4 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Jun  1 13:43:34.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1139" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":292,"completed":165,"skipped":2630,"failed":0}
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 13:43:34.860: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod with failed condition
STEP: updating the pod
Jun  1 13:45:35.429: INFO: Successfully updated pod "var-expansion-ff7aa519-52e7-4c4d-819b-101904cf2670"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Jun  1 13:45:37.446: INFO: Deleting pod "var-expansion-ff7aa519-52e7-4c4d-819b-101904cf2670" in namespace "var-expansion-5582"
Jun  1 13:45:37.452: INFO: Wait up to 5m0s for pod "var-expansion-ff7aa519-52e7-4c4d-819b-101904cf2670" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 13:46:13.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5582" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":292,"completed":166,"skipped":2634,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] PodTemplates 
  should run the lifecycle of PodTemplates [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] PodTemplates
... skipping 5 lines ...
[It] should run the lifecycle of PodTemplates [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [sig-node] PodTemplates
  test/e2e/framework/framework.go:175
Jun  1 13:46:13.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-4814" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":292,"completed":167,"skipped":2662,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 13:46:13.573: INFO: Waiting up to 5m0s for pod "downwardapi-volume-65903f9e-ffce-4cc9-a321-0389e69c7a74" in namespace "downward-api-391" to be "Succeeded or Failed"
Jun  1 13:46:13.576: INFO: Pod "downwardapi-volume-65903f9e-ffce-4cc9-a321-0389e69c7a74": Phase="Pending", Reason="", readiness=false. Elapsed: 3.173303ms
Jun  1 13:46:15.582: INFO: Pod "downwardapi-volume-65903f9e-ffce-4cc9-a321-0389e69c7a74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008662335s
Jun  1 13:46:17.593: INFO: Pod "downwardapi-volume-65903f9e-ffce-4cc9-a321-0389e69c7a74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019263553s
STEP: Saw pod success
Jun  1 13:46:17.593: INFO: Pod "downwardapi-volume-65903f9e-ffce-4cc9-a321-0389e69c7a74" satisfied condition "Succeeded or Failed"
Jun  1 13:46:17.602: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-65903f9e-ffce-4cc9-a321-0389e69c7a74 container client-container: <nil>
STEP: delete the pod
Jun  1 13:46:17.642: INFO: Waiting for pod downwardapi-volume-65903f9e-ffce-4cc9-a321-0389e69c7a74 to disappear
Jun  1 13:46:17.649: INFO: Pod downwardapi-volume-65903f9e-ffce-4cc9-a321-0389e69c7a74 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 13:46:17.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-391" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":168,"skipped":2702,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
Jun  1 13:46:43.804: INFO: Restart count of pod container-probe-6419/liveness-4370e615-f335-4696-ad58-250a5722d230 is now 1 (24.074639975s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 13:46:43.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6419" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":292,"completed":169,"skipped":2756,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 13:46:59.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7367" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":292,"completed":170,"skipped":2764,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 28 lines ...
Jun  1 13:47:13.147: INFO: stderr: ""
Jun  1 13:47:13.147: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 13:47:13.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5474" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":292,"completed":171,"skipped":2780,"failed":0}

------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 17 lines ...
Jun  1 13:47:17.465: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 13:47:17.675: INFO: Deleting pod dns-2509...
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 13:47:17.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2509" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":292,"completed":172,"skipped":2780,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 13:47:17.754: INFO: Waiting up to 5m0s for pod "downwardapi-volume-941275bd-78bf-4700-b5f5-9bb730be2d4e" in namespace "projected-6628" to be "Succeeded or Failed"
Jun  1 13:47:17.757: INFO: Pod "downwardapi-volume-941275bd-78bf-4700-b5f5-9bb730be2d4e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.470593ms
Jun  1 13:47:19.765: INFO: Pod "downwardapi-volume-941275bd-78bf-4700-b5f5-9bb730be2d4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010735893s
Jun  1 13:47:21.769: INFO: Pod "downwardapi-volume-941275bd-78bf-4700-b5f5-9bb730be2d4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015138873s
STEP: Saw pod success
Jun  1 13:47:21.769: INFO: Pod "downwardapi-volume-941275bd-78bf-4700-b5f5-9bb730be2d4e" satisfied condition "Succeeded or Failed"
Jun  1 13:47:21.773: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-941275bd-78bf-4700-b5f5-9bb730be2d4e container client-container: <nil>
STEP: delete the pod
Jun  1 13:47:21.803: INFO: Waiting for pod downwardapi-volume-941275bd-78bf-4700-b5f5-9bb730be2d4e to disappear
Jun  1 13:47:21.807: INFO: Pod downwardapi-volume-941275bd-78bf-4700-b5f5-9bb730be2d4e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 13:47:21.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6628" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":292,"completed":173,"skipped":2801,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-4018bd37-a16a-401d-899d-d14d8470ed24
STEP: Creating a pod to test consume configMaps
Jun  1 13:47:21.857: INFO: Waiting up to 5m0s for pod "pod-configmaps-d4e64f3c-6234-49ec-b793-9834e93ca0d4" in namespace "configmap-2031" to be "Succeeded or Failed"
Jun  1 13:47:21.860: INFO: Pod "pod-configmaps-d4e64f3c-6234-49ec-b793-9834e93ca0d4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.140774ms
Jun  1 13:47:23.865: INFO: Pod "pod-configmaps-d4e64f3c-6234-49ec-b793-9834e93ca0d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008050766s
Jun  1 13:47:25.873: INFO: Pod "pod-configmaps-d4e64f3c-6234-49ec-b793-9834e93ca0d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015594503s
STEP: Saw pod success
Jun  1 13:47:25.873: INFO: Pod "pod-configmaps-d4e64f3c-6234-49ec-b793-9834e93ca0d4" satisfied condition "Succeeded or Failed"
Jun  1 13:47:25.876: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-d4e64f3c-6234-49ec-b793-9834e93ca0d4 container configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 13:47:25.899: INFO: Waiting for pod pod-configmaps-d4e64f3c-6234-49ec-b793-9834e93ca0d4 to disappear
Jun  1 13:47:25.904: INFO: Pod pod-configmaps-d4e64f3c-6234-49ec-b793-9834e93ca0d4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 13:47:25.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2031" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":174,"skipped":2807,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should run through the lifecycle of a ServiceAccount [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 10 lines ...
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
Jun  1 13:47:25.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8814" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":292,"completed":175,"skipped":2823,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-df2bb72a-8701-45f8-b11f-dfaf2c633257
STEP: Creating a pod to test consume configMaps
Jun  1 13:47:26.046: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-be9c6b01-0d3d-4b27-bb98-65b38e4727df" in namespace "projected-8991" to be "Succeeded or Failed"
Jun  1 13:47:26.052: INFO: Pod "pod-projected-configmaps-be9c6b01-0d3d-4b27-bb98-65b38e4727df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.237825ms
Jun  1 13:47:28.059: INFO: Pod "pod-projected-configmaps-be9c6b01-0d3d-4b27-bb98-65b38e4727df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013163308s
Jun  1 13:47:30.064: INFO: Pod "pod-projected-configmaps-be9c6b01-0d3d-4b27-bb98-65b38e4727df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017951897s
STEP: Saw pod success
Jun  1 13:47:30.064: INFO: Pod "pod-projected-configmaps-be9c6b01-0d3d-4b27-bb98-65b38e4727df" satisfied condition "Succeeded or Failed"
Jun  1 13:47:30.067: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-be9c6b01-0d3d-4b27-bb98-65b38e4727df container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 13:47:30.089: INFO: Waiting for pod pod-projected-configmaps-be9c6b01-0d3d-4b27-bb98-65b38e4727df to disappear
Jun  1 13:47:30.093: INFO: Pod pod-projected-configmaps-be9c6b01-0d3d-4b27-bb98-65b38e4727df no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 13:47:30.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8991" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":176,"skipped":2870,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 13:47:34.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8727" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":177,"skipped":2874,"failed":0}
SSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
... skipping 21 lines ...
Jun  1 13:47:43.342: INFO: Pod "adopt-release-j9tkm": Phase="Running", Reason="", readiness=true. Elapsed: 2.009590989s
Jun  1 13:47:43.342: INFO: Pod "adopt-release-j9tkm" satisfied condition "released"
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Jun  1 13:47:43.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5426" for this suite.
•{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":292,"completed":178,"skipped":2880,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Jun  1 13:47:43.392: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Jun  1 13:47:53.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8169" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":292,"completed":179,"skipped":2910,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 13:47:53.644: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-e8186f13-a618-4882-aac5-0a80dcfbd8e7" in namespace "security-context-test-7605" to be "Succeeded or Failed"
Jun  1 13:47:53.647: INFO: Pod "busybox-readonly-false-e8186f13-a618-4882-aac5-0a80dcfbd8e7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.676831ms
Jun  1 13:47:55.655: INFO: Pod "busybox-readonly-false-e8186f13-a618-4882-aac5-0a80dcfbd8e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011588962s
Jun  1 13:47:57.661: INFO: Pod "busybox-readonly-false-e8186f13-a618-4882-aac5-0a80dcfbd8e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017160866s
Jun  1 13:47:57.661: INFO: Pod "busybox-readonly-false-e8186f13-a618-4882-aac5-0a80dcfbd8e7" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Jun  1 13:47:57.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7605" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":292,"completed":180,"skipped":2926,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 13:47:57.678: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-b3a2dcfe-41aa-443f-8ef4-511fa42812d2
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Jun  1 13:47:57.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-178" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":292,"completed":181,"skipped":2935,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-d35b9c7b-aac0-4a73-ae5c-e843486303ea
STEP: Creating a pod to test consume configMaps
Jun  1 13:47:57.773: INFO: Waiting up to 5m0s for pod "pod-configmaps-de8c4f69-34ca-435a-989c-ffc2a4fdb5e4" in namespace "configmap-546" to be "Succeeded or Failed"
Jun  1 13:47:57.777: INFO: Pod "pod-configmaps-de8c4f69-34ca-435a-989c-ffc2a4fdb5e4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.785692ms
Jun  1 13:47:59.788: INFO: Pod "pod-configmaps-de8c4f69-34ca-435a-989c-ffc2a4fdb5e4": Phase="Running", Reason="", readiness=true. Elapsed: 2.014639518s
Jun  1 13:48:01.795: INFO: Pod "pod-configmaps-de8c4f69-34ca-435a-989c-ffc2a4fdb5e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021986559s
STEP: Saw pod success
Jun  1 13:48:01.795: INFO: Pod "pod-configmaps-de8c4f69-34ca-435a-989c-ffc2a4fdb5e4" satisfied condition "Succeeded or Failed"
Jun  1 13:48:01.800: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-de8c4f69-34ca-435a-989c-ffc2a4fdb5e4 container configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 13:48:01.821: INFO: Waiting for pod pod-configmaps-de8c4f69-34ca-435a-989c-ffc2a4fdb5e4 to disappear
Jun  1 13:48:01.825: INFO: Pod pod-configmaps-de8c4f69-34ca-435a-989c-ffc2a4fdb5e4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 13:48:01.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-546" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":292,"completed":182,"skipped":2937,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 13:48:12.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1499" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":292,"completed":183,"skipped":3003,"failed":0}
SSSS
------------------------------
[sig-network] Services 
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 71 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 13:49:33.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4845" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":292,"completed":184,"skipped":3007,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Jun  1 13:49:33.310: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://127.0.0.1:41191 --kubeconfig=/root/.kube/kind-test-config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 13:49:33.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8922" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":292,"completed":185,"skipped":3055,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-d2d9a145-42bf-41a6-aa03-45a4327fa472
STEP: Creating a pod to test consume configMaps
Jun  1 13:49:33.596: INFO: Waiting up to 5m0s for pod "pod-configmaps-ffb840b3-3571-4486-87c7-38e751d0492d" in namespace "configmap-7063" to be "Succeeded or Failed"
Jun  1 13:49:33.598: INFO: Pod "pod-configmaps-ffb840b3-3571-4486-87c7-38e751d0492d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315434ms
Jun  1 13:49:35.604: INFO: Pod "pod-configmaps-ffb840b3-3571-4486-87c7-38e751d0492d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008201763s
STEP: Saw pod success
Jun  1 13:49:35.604: INFO: Pod "pod-configmaps-ffb840b3-3571-4486-87c7-38e751d0492d" satisfied condition "Succeeded or Failed"
Jun  1 13:49:35.608: INFO: Trying to get logs from node kind-worker pod pod-configmaps-ffb840b3-3571-4486-87c7-38e751d0492d container configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 13:49:35.637: INFO: Waiting for pod pod-configmaps-ffb840b3-3571-4486-87c7-38e751d0492d to disappear
Jun  1 13:49:35.640: INFO: Pod pod-configmaps-ffb840b3-3571-4486-87c7-38e751d0492d no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 13:49:35.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7063" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":292,"completed":186,"skipped":3075,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 33 lines ...

W0601 13:49:45.714361   11738 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Jun  1 13:49:45.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9920" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":292,"completed":187,"skipped":3100,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-236c2641-1a32-41e3-afc9-f3b2a82c220f
STEP: Creating a pod to test consume secrets
Jun  1 13:49:45.770: INFO: Waiting up to 5m0s for pod "pod-secrets-e5c64492-aad6-4d91-bea1-462e5a2d42f2" in namespace "secrets-101" to be "Succeeded or Failed"
Jun  1 13:49:45.773: INFO: Pod "pod-secrets-e5c64492-aad6-4d91-bea1-462e5a2d42f2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.30192ms
Jun  1 13:49:47.777: INFO: Pod "pod-secrets-e5c64492-aad6-4d91-bea1-462e5a2d42f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007557269s
STEP: Saw pod success
Jun  1 13:49:47.777: INFO: Pod "pod-secrets-e5c64492-aad6-4d91-bea1-462e5a2d42f2" satisfied condition "Succeeded or Failed"
Jun  1 13:49:47.781: INFO: Trying to get logs from node kind-worker pod pod-secrets-e5c64492-aad6-4d91-bea1-462e5a2d42f2 container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 13:49:47.802: INFO: Waiting for pod pod-secrets-e5c64492-aad6-4d91-bea1-462e5a2d42f2 to disappear
Jun  1 13:49:47.805: INFO: Pod pod-secrets-e5c64492-aad6-4d91-bea1-462e5a2d42f2 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 13:49:47.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-101" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":188,"skipped":3110,"failed":0}

------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 11 lines ...
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 13:49:47.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1269" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":292,"completed":189,"skipped":3110,"failed":0}
S
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 14 lines ...
STEP: verifying the updated pod is in kubernetes
Jun  1 13:49:52.476: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Jun  1 13:49:52.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4306" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":292,"completed":190,"skipped":3111,"failed":0}

------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 13:49:52.485: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod
Jun  1 13:49:52.524: INFO: PodSpec: initContainers in spec.initContainers
Jun  1 13:50:46.940: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-3e765524-9e8c-48e0-9d1e-68fa838f834e", GenerateName:"", Namespace:"init-container-6168", SelfLink:"/api/v1/namespaces/init-container-6168/pods/pod-init-3e765524-9e8c-48e0-9d1e-68fa838f834e", UID:"0d983c8b-eadb-42d2-b046-bd005fc9bb31", ResourceVersion:"23619", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726616192, loc:(*time.Location)(0x8006d20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"524838453"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0036076a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0036076c0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0036076e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003607700)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-8mjmd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc005eeb340), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8mjmd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", 
Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8mjmd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8mjmd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0034bafb8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0025f1880), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0034bb040)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0034bb060)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0034bb068), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0034bb06c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616192, loc:(*time.Location)(0x8006d20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616192, loc:(*time.Location)(0x8006d20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616192, loc:(*time.Location)(0x8006d20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616192, loc:(*time.Location)(0x8006d20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.2", PodIP:"10.244.1.137", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.137"}}, StartTime:(*v1.Time)(0xc003607720), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025f1960)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025f19d0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://2cf5f95441c6201e2d44fa1d659773118b6e6c816463253049090477d8f9c6d2", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003607760), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003607740), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0034bb0ff)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Jun  1 13:50:46.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6168" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":292,"completed":191,"skipped":3111,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 13:50:46.950: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun  1 13:50:46.986: INFO: Waiting up to 5m0s for pod "pod-5153714b-49c9-4018-b18b-8b788b0cc1d7" in namespace "emptydir-9172" to be "Succeeded or Failed"
Jun  1 13:50:46.989: INFO: Pod "pod-5153714b-49c9-4018-b18b-8b788b0cc1d7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.08469ms
Jun  1 13:50:49.000: INFO: Pod "pod-5153714b-49c9-4018-b18b-8b788b0cc1d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014074172s
Jun  1 13:50:51.006: INFO: Pod "pod-5153714b-49c9-4018-b18b-8b788b0cc1d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019812978s
STEP: Saw pod success
Jun  1 13:50:51.006: INFO: Pod "pod-5153714b-49c9-4018-b18b-8b788b0cc1d7" satisfied condition "Succeeded or Failed"
Jun  1 13:50:51.010: INFO: Trying to get logs from node kind-worker2 pod pod-5153714b-49c9-4018-b18b-8b788b0cc1d7 container test-container: <nil>
STEP: delete the pod
Jun  1 13:50:51.036: INFO: Waiting for pod pod-5153714b-49c9-4018-b18b-8b788b0cc1d7 to disappear
Jun  1 13:50:51.039: INFO: Pod pod-5153714b-49c9-4018-b18b-8b788b0cc1d7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 13:50:51.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9172" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":192,"skipped":3112,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 13:51:07.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8553" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":292,"completed":193,"skipped":3133,"failed":0}
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 23 lines ...
Jun  1 13:51:23.261: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Jun  1 13:51:23.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8321" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":292,"completed":194,"skipped":3135,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
Jun  1 13:51:33.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-625" for this suite.
STEP: Destroying namespace "webhook-625-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":292,"completed":195,"skipped":3145,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 13:51:33.375: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun  1 13:51:33.427: INFO: Waiting up to 5m0s for pod "pod-07ac1347-ab46-4743-8658-32d2521831ad" in namespace "emptydir-477" to be "Succeeded or Failed"
Jun  1 13:51:33.433: INFO: Pod "pod-07ac1347-ab46-4743-8658-32d2521831ad": Phase="Pending", Reason="", readiness=false. Elapsed: 5.970833ms
Jun  1 13:51:35.438: INFO: Pod "pod-07ac1347-ab46-4743-8658-32d2521831ad": Phase="Running", Reason="", readiness=true. Elapsed: 2.010815698s
Jun  1 13:51:37.442: INFO: Pod "pod-07ac1347-ab46-4743-8658-32d2521831ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015346245s
STEP: Saw pod success
Jun  1 13:51:37.442: INFO: Pod "pod-07ac1347-ab46-4743-8658-32d2521831ad" satisfied condition "Succeeded or Failed"
Jun  1 13:51:37.446: INFO: Trying to get logs from node kind-worker pod pod-07ac1347-ab46-4743-8658-32d2521831ad container test-container: <nil>
STEP: delete the pod
Jun  1 13:51:37.481: INFO: Waiting for pod pod-07ac1347-ab46-4743-8658-32d2521831ad to disappear
Jun  1 13:51:37.483: INFO: Pod pod-07ac1347-ab46-4743-8658-32d2521831ad no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 13:51:37.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-477" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":196,"skipped":3170,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Jun  1 13:51:37.492: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jun  1 13:51:37.529: INFO: Waiting up to 5m0s for pod "downward-api-eaf861b3-7a0a-4467-aa80-85354d98a94a" in namespace "downward-api-7521" to be "Succeeded or Failed"
Jun  1 13:51:37.532: INFO: Pod "downward-api-eaf861b3-7a0a-4467-aa80-85354d98a94a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.932338ms
Jun  1 13:51:39.539: INFO: Pod "downward-api-eaf861b3-7a0a-4467-aa80-85354d98a94a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009968839s
Jun  1 13:51:41.548: INFO: Pod "downward-api-eaf861b3-7a0a-4467-aa80-85354d98a94a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019157337s
STEP: Saw pod success
Jun  1 13:51:41.549: INFO: Pod "downward-api-eaf861b3-7a0a-4467-aa80-85354d98a94a" satisfied condition "Succeeded or Failed"
Jun  1 13:51:41.555: INFO: Trying to get logs from node kind-worker2 pod downward-api-eaf861b3-7a0a-4467-aa80-85354d98a94a container dapi-container: <nil>
STEP: delete the pod
Jun  1 13:51:41.581: INFO: Waiting for pod downward-api-eaf861b3-7a0a-4467-aa80-85354d98a94a to disappear
Jun  1 13:51:41.584: INFO: Pod downward-api-eaf861b3-7a0a-4467-aa80-85354d98a94a no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Jun  1 13:51:41.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7521" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":292,"completed":197,"skipped":3197,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Jun  1 13:51:45.832: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:51:45.836: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:51:45.873: INFO: Unable to read jessie_udp@dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:51:45.877: INFO: Unable to read jessie_tcp@dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:51:45.880: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:51:45.884: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:51:45.906: INFO: Lookups using dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2 failed for: [wheezy_udp@dns-test-service.dns-6637.svc.cluster.local wheezy_tcp@dns-test-service.dns-6637.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local jessie_udp@dns-test-service.dns-6637.svc.cluster.local jessie_tcp@dns-test-service.dns-6637.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local]

Jun  1 13:51:50.913: INFO: Unable to read wheezy_udp@dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:51:50.919: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:51:50.924: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:51:50.928: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:51:50.953: INFO: Unable to read jessie_udp@dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:51:50.957: INFO: Unable to read jessie_tcp@dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:51:50.961: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:51:50.965: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:51:50.987: INFO: Lookups using dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2 failed for: [wheezy_udp@dns-test-service.dns-6637.svc.cluster.local wheezy_tcp@dns-test-service.dns-6637.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local jessie_udp@dns-test-service.dns-6637.svc.cluster.local jessie_tcp@dns-test-service.dns-6637.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local]

Jun  1 13:51:55.916: INFO: Unable to read wheezy_udp@dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:51:55.923: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:51:55.928: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:51:55.933: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:51:55.960: INFO: Unable to read jessie_udp@dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:51:55.964: INFO: Unable to read jessie_tcp@dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:51:55.967: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:51:55.970: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:51:55.991: INFO: Lookups using dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2 failed for: [wheezy_udp@dns-test-service.dns-6637.svc.cluster.local wheezy_tcp@dns-test-service.dns-6637.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local jessie_udp@dns-test-service.dns-6637.svc.cluster.local jessie_tcp@dns-test-service.dns-6637.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local]

Jun  1 13:52:00.909: INFO: Unable to read wheezy_udp@dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:00.913: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:00.917: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:00.921: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:00.949: INFO: Unable to read jessie_udp@dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:00.953: INFO: Unable to read jessie_tcp@dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:00.957: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:00.961: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:00.982: INFO: Lookups using dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2 failed for: [wheezy_udp@dns-test-service.dns-6637.svc.cluster.local wheezy_tcp@dns-test-service.dns-6637.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local jessie_udp@dns-test-service.dns-6637.svc.cluster.local jessie_tcp@dns-test-service.dns-6637.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local]

Jun  1 13:52:05.913: INFO: Unable to read wheezy_udp@dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:05.917: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:05.921: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:05.924: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:05.952: INFO: Unable to read jessie_udp@dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:05.956: INFO: Unable to read jessie_tcp@dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:05.960: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:05.964: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:05.988: INFO: Lookups using dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2 failed for: [wheezy_udp@dns-test-service.dns-6637.svc.cluster.local wheezy_tcp@dns-test-service.dns-6637.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local jessie_udp@dns-test-service.dns-6637.svc.cluster.local jessie_tcp@dns-test-service.dns-6637.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local]

Jun  1 13:52:10.913: INFO: Unable to read wheezy_udp@dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:10.918: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:10.921: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:10.925: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:10.953: INFO: Unable to read jessie_udp@dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:10.956: INFO: Unable to read jessie_tcp@dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:10.960: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:10.964: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local from pod dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2: the server could not find the requested resource (get pods dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2)
Jun  1 13:52:10.989: INFO: Lookups using dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2 failed for: [wheezy_udp@dns-test-service.dns-6637.svc.cluster.local wheezy_tcp@dns-test-service.dns-6637.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local jessie_udp@dns-test-service.dns-6637.svc.cluster.local jessie_tcp@dns-test-service.dns-6637.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6637.svc.cluster.local]

Jun  1 13:52:16.000: INFO: DNS probes using dns-6637/dns-test-2a1b75ee-f403-4c8a-9894-7a256187fab2 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 13:52:16.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6637" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":292,"completed":198,"skipped":3222,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 13:52:16.121: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename webhook
... skipping 6 lines ...
STEP: Wait for the deployment to be ready
Jun  1 13:52:17.233: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun  1 13:52:19.249: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616337, loc:(*time.Location)(0x8006d20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616337, loc:(*time.Location)(0x8006d20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616337, loc:(*time.Location)(0x8006d20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726616337, loc:(*time.Location)(0x8006d20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun  1 13:52:22.262: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
STEP: Registering a webhook that the server cannot talk to, with a fail-closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: creating a configmap that should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 13:52:22.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7658" for this suite.
STEP: Destroying namespace "webhook-7658-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":292,"completed":199,"skipped":3300,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Jun  1 13:52:22.428: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 13:52:28.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7900" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":292,"completed":200,"skipped":3320,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
... skipping 12 lines ...
Jun  1 13:52:33.750: INFO: Trying to dial the pod
Jun  1 13:52:38.764: INFO: Controller my-hostname-basic-b0f32e77-4dd1-4840-a7e9-0b3916145e1a: Got expected result from replica 1 [my-hostname-basic-b0f32e77-4dd1-4840-a7e9-0b3916145e1a-mfhk8]: "my-hostname-basic-b0f32e77-4dd1-4840-a7e9-0b3916145e1a-mfhk8", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:175
Jun  1 13:52:38.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5643" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":292,"completed":201,"skipped":3335,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 13:52:38.818: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-42e35652-09bc-4c69-82fb-9b2125e00466" in namespace "security-context-test-2479" to be "Succeeded or Failed"
Jun  1 13:52:38.832: INFO: Pod "alpine-nnp-false-42e35652-09bc-4c69-82fb-9b2125e00466": Phase="Pending", Reason="", readiness=false. Elapsed: 14.126451ms
Jun  1 13:52:40.837: INFO: Pod "alpine-nnp-false-42e35652-09bc-4c69-82fb-9b2125e00466": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019332411s
Jun  1 13:52:42.844: INFO: Pod "alpine-nnp-false-42e35652-09bc-4c69-82fb-9b2125e00466": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026507493s
Jun  1 13:52:42.844: INFO: Pod "alpine-nnp-false-42e35652-09bc-4c69-82fb-9b2125e00466" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Jun  1 13:52:42.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2479" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":202,"skipped":3352,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-d8b108ff-e051-43af-86b2-5e27c862900e
STEP: Creating a pod to test consume secrets
Jun  1 13:52:42.907: INFO: Waiting up to 5m0s for pod "pod-secrets-7b510a61-5494-4e00-bc62-a8336513e03b" in namespace "secrets-6579" to be "Succeeded or Failed"
Jun  1 13:52:42.909: INFO: Pod "pod-secrets-7b510a61-5494-4e00-bc62-a8336513e03b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149333ms
Jun  1 13:52:44.913: INFO: Pod "pod-secrets-7b510a61-5494-4e00-bc62-a8336513e03b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006108761s
STEP: Saw pod success
Jun  1 13:52:44.913: INFO: Pod "pod-secrets-7b510a61-5494-4e00-bc62-a8336513e03b" satisfied condition "Succeeded or Failed"
Jun  1 13:52:44.918: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-7b510a61-5494-4e00-bc62-a8336513e03b container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 13:52:44.938: INFO: Waiting for pod pod-secrets-7b510a61-5494-4e00-bc62-a8336513e03b to disappear
Jun  1 13:52:44.941: INFO: Pod pod-secrets-7b510a61-5494-4e00-bc62-a8336513e03b no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 13:52:44.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6579" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":203,"skipped":3363,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 13:52:44.953: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun  1 13:52:44.994: INFO: Waiting up to 5m0s for pod "pod-e7c26704-3448-4fa2-9695-c805aef0fe04" in namespace "emptydir-630" to be "Succeeded or Failed"
Jun  1 13:52:44.997: INFO: Pod "pod-e7c26704-3448-4fa2-9695-c805aef0fe04": Phase="Pending", Reason="", readiness=false. Elapsed: 3.354173ms
Jun  1 13:52:47.003: INFO: Pod "pod-e7c26704-3448-4fa2-9695-c805aef0fe04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009363277s
Jun  1 13:52:49.009: INFO: Pod "pod-e7c26704-3448-4fa2-9695-c805aef0fe04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015073051s
STEP: Saw pod success
Jun  1 13:52:49.009: INFO: Pod "pod-e7c26704-3448-4fa2-9695-c805aef0fe04" satisfied condition "Succeeded or Failed"
Jun  1 13:52:49.017: INFO: Trying to get logs from node kind-worker pod pod-e7c26704-3448-4fa2-9695-c805aef0fe04 container test-container: <nil>
STEP: delete the pod
Jun  1 13:52:49.048: INFO: Waiting for pod pod-e7c26704-3448-4fa2-9695-c805aef0fe04 to disappear
Jun  1 13:52:49.052: INFO: Pod pod-e7c26704-3448-4fa2-9695-c805aef0fe04 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 13:52:49.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-630" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":204,"skipped":3368,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Jun  1 13:52:49.409: INFO: stderr: ""
Jun  1 13:52:49.409: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 13:52:49.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3721" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":292,"completed":205,"skipped":3385,"failed":0}
SSSSS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 13:52:49.429: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 13:54:49.499: INFO: Deleting pod "var-expansion-caa66380-bf6b-4073-b4a6-f2486ecc00a4" in namespace "var-expansion-3298"
Jun  1 13:54:49.505: INFO: Wait up to 5m0s for pod "var-expansion-caa66380-bf6b-4073-b4a6-f2486ecc00a4" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 13:54:53.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3298" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":292,"completed":206,"skipped":3390,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 13:54:53.572: INFO: Waiting up to 5m0s for pod "busybox-user-65534-fc22986c-bb76-42a4-9952-5e5ab4c2d746" in namespace "security-context-test-4768" to be "Succeeded or Failed"
Jun  1 13:54:53.576: INFO: Pod "busybox-user-65534-fc22986c-bb76-42a4-9952-5e5ab4c2d746": Phase="Pending", Reason="", readiness=false. Elapsed: 4.470845ms
Jun  1 13:54:55.580: INFO: Pod "busybox-user-65534-fc22986c-bb76-42a4-9952-5e5ab4c2d746": Phase="Running", Reason="", readiness=true. Elapsed: 2.00836514s
Jun  1 13:54:57.584: INFO: Pod "busybox-user-65534-fc22986c-bb76-42a4-9952-5e5ab4c2d746": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012786312s
Jun  1 13:54:57.585: INFO: Pod "busybox-user-65534-fc22986c-bb76-42a4-9952-5e5ab4c2d746" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Jun  1 13:54:57.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4768" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":207,"skipped":3412,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 7 lines ...
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 13:55:57.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8288" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":292,"completed":208,"skipped":3434,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-2860/configmap-test-1e54a40e-b425-4165-9ef0-29f86844cdab
STEP: Creating a pod to test consume configMaps
Jun  1 13:55:57.705: INFO: Waiting up to 5m0s for pod "pod-configmaps-2e349664-fa3b-4a9f-aa24-17e82e3ea7ad" in namespace "configmap-2860" to be "Succeeded or Failed"
Jun  1 13:55:57.710: INFO: Pod "pod-configmaps-2e349664-fa3b-4a9f-aa24-17e82e3ea7ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.813887ms
Jun  1 13:55:59.716: INFO: Pod "pod-configmaps-2e349664-fa3b-4a9f-aa24-17e82e3ea7ad": Phase="Running", Reason="", readiness=true. Elapsed: 2.011067433s
Jun  1 13:56:01.721: INFO: Pod "pod-configmaps-2e349664-fa3b-4a9f-aa24-17e82e3ea7ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0160884s
STEP: Saw pod success
Jun  1 13:56:01.722: INFO: Pod "pod-configmaps-2e349664-fa3b-4a9f-aa24-17e82e3ea7ad" satisfied condition "Succeeded or Failed"
Jun  1 13:56:01.727: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-2e349664-fa3b-4a9f-aa24-17e82e3ea7ad container env-test: <nil>
STEP: delete the pod
Jun  1 13:56:01.770: INFO: Waiting for pod pod-configmaps-2e349664-fa3b-4a9f-aa24-17e82e3ea7ad to disappear
Jun  1 13:56:01.778: INFO: Pod pod-configmaps-2e349664-fa3b-4a9f-aa24-17e82e3ea7ad no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 13:56:01.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2860" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":292,"completed":209,"skipped":3478,"failed":0}
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 130 lines ...
Jun  1 13:56:55.825: INFO: Waiting for statefulset status.replicas updated to 0
Jun  1 13:56:55.828: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Jun  1 13:56:55.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7446" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":292,"completed":210,"skipped":3481,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 13:56:55.851: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun  1 13:56:55.889: INFO: Waiting up to 5m0s for pod "pod-231163c5-6fa9-47cb-ae57-637928c328d4" in namespace "emptydir-3391" to be "Succeeded or Failed"
Jun  1 13:56:55.893: INFO: Pod "pod-231163c5-6fa9-47cb-ae57-637928c328d4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.561325ms
Jun  1 13:56:57.904: INFO: Pod "pod-231163c5-6fa9-47cb-ae57-637928c328d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014027339s
Jun  1 13:56:59.913: INFO: Pod "pod-231163c5-6fa9-47cb-ae57-637928c328d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023389564s
STEP: Saw pod success
Jun  1 13:56:59.913: INFO: Pod "pod-231163c5-6fa9-47cb-ae57-637928c328d4" satisfied condition "Succeeded or Failed"
Jun  1 13:56:59.918: INFO: Trying to get logs from node kind-worker pod pod-231163c5-6fa9-47cb-ae57-637928c328d4 container test-container: <nil>
STEP: delete the pod
Jun  1 13:56:59.959: INFO: Waiting for pod pod-231163c5-6fa9-47cb-ae57-637928c328d4 to disappear
Jun  1 13:56:59.964: INFO: Pod pod-231163c5-6fa9-47cb-ae57-637928c328d4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 13:56:59.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3391" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":211,"skipped":3496,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Jun  1 13:57:00.013: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://127.0.0.1:41191 --kubeconfig=/root/.kube/kind-test-config proxy --unix-socket=/tmp/kubectl-proxy-unix374162863/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 13:57:00.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1416" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":292,"completed":212,"skipped":3558,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 22 lines ...
Jun  1 13:57:10.377: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3027 /api/v1/namespaces/watch-3027/configmaps/e2e-watch-test-label-changed 0fc395da-ea9d-4185-a2c8-d074a398cd4c 25630 0 2020-06-01 13:57:00 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-06-01 13:57:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Jun  1 13:57:10.378: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3027 /api/v1/namespaces/watch-3027/configmaps/e2e-watch-test-label-changed 0fc395da-ea9d-4185-a2c8-d074a398cd4c 25631 0 2020-06-01 13:57:00 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-06-01 13:57:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Jun  1 13:57:10.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3027" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":292,"completed":213,"skipped":3578,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
... skipping 11 lines ...
Jun  1 13:57:15.519: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:175
Jun  1 13:57:16.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-424" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":292,"completed":214,"skipped":3586,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-9fa774f3-6e20-4ffd-8473-053201b32c7f
STEP: Creating a pod to test consume secrets
Jun  1 13:57:16.615: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-66dd6c5e-6a56-4d76-9c6a-d1071a9a7dcf" in namespace "projected-7598" to be "Succeeded or Failed"
Jun  1 13:57:16.618: INFO: Pod "pod-projected-secrets-66dd6c5e-6a56-4d76-9c6a-d1071a9a7dcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.760927ms
Jun  1 13:57:18.623: INFO: Pod "pod-projected-secrets-66dd6c5e-6a56-4d76-9c6a-d1071a9a7dcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007702042s
Jun  1 13:57:20.627: INFO: Pod "pod-projected-secrets-66dd6c5e-6a56-4d76-9c6a-d1071a9a7dcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011980708s
STEP: Saw pod success
Jun  1 13:57:20.627: INFO: Pod "pod-projected-secrets-66dd6c5e-6a56-4d76-9c6a-d1071a9a7dcf" satisfied condition "Succeeded or Failed"
Jun  1 13:57:20.632: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-66dd6c5e-6a56-4d76-9c6a-d1071a9a7dcf container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun  1 13:57:20.656: INFO: Waiting for pod pod-projected-secrets-66dd6c5e-6a56-4d76-9c6a-d1071a9a7dcf to disappear
Jun  1 13:57:20.658: INFO: Pod pod-projected-secrets-66dd6c5e-6a56-4d76-9c6a-d1071a9a7dcf no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Jun  1 13:57:20.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7598" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":215,"skipped":3589,"failed":0}
SSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 3 lines ...
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  test/e2e/common/pods.go:179
[It] should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 13:57:24.754: INFO: Waiting up to 5m0s for pod "client-envvars-ddcdd34a-f1db-4315-85f4-21ad33a88475" in namespace "pods-2399" to be "Succeeded or Failed"
Jun  1 13:57:24.761: INFO: Pod "client-envvars-ddcdd34a-f1db-4315-85f4-21ad33a88475": Phase="Pending", Reason="", readiness=false. Elapsed: 6.916723ms
Jun  1 13:57:26.768: INFO: Pod "client-envvars-ddcdd34a-f1db-4315-85f4-21ad33a88475": Phase="Running", Reason="", readiness=true. Elapsed: 2.013632661s
Jun  1 13:57:28.773: INFO: Pod "client-envvars-ddcdd34a-f1db-4315-85f4-21ad33a88475": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018677363s
STEP: Saw pod success
Jun  1 13:57:28.773: INFO: Pod "client-envvars-ddcdd34a-f1db-4315-85f4-21ad33a88475" satisfied condition "Succeeded or Failed"
Jun  1 13:57:28.779: INFO: Trying to get logs from node kind-worker pod client-envvars-ddcdd34a-f1db-4315-85f4-21ad33a88475 container env3cont: <nil>
STEP: delete the pod
Jun  1 13:57:28.803: INFO: Waiting for pod client-envvars-ddcdd34a-f1db-4315-85f4-21ad33a88475 to disappear
Jun  1 13:57:28.810: INFO: Pod client-envvars-ddcdd34a-f1db-4315-85f4-21ad33a88475 no longer exists
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Jun  1 13:57:28.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2399" for this suite.
•{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":292,"completed":216,"skipped":3592,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 13:57:35.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-856" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":292,"completed":217,"skipped":3600,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl diff 
  should check if kubectl diff finds a difference for Deployments [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 17 lines ...
Jun  1 13:57:37.991: INFO: stderr: ""
Jun  1 13:57:37.991: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 13:57:37.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2511" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":292,"completed":218,"skipped":3616,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 13 lines ...
Jun  1 13:57:38.088: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-4245 /api/v1/namespaces/watch-4245/configmaps/e2e-watch-test-resource-version 108e561d-c3f7-4c73-866a-8fcf9efd21b3 25920 0 2020-06-01 13:57:38 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-06-01 13:57:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jun  1 13:57:38.088: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-4245 /api/v1/namespaces/watch-4245/configmaps/e2e-watch-test-resource-version 108e561d-c3f7-4c73-866a-8fcf9efd21b3 25921 0 2020-06-01 13:57:38 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-06-01 13:57:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Jun  1 13:57:38.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4245" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":292,"completed":219,"skipped":3623,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 36 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Jun  1 13:57:50.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5729" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":292,"completed":220,"skipped":3661,"failed":0}
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-14f00599-6b5c-4d6e-87b2-a68d693c97d7
STEP: Creating a pod to test consume configMaps
Jun  1 13:57:50.356: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4547d861-966a-469e-8591-866ed64e607b" in namespace "projected-8135" to be "Succeeded or Failed"
Jun  1 13:57:50.366: INFO: Pod "pod-projected-configmaps-4547d861-966a-469e-8591-866ed64e607b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.304577ms
Jun  1 13:57:52.378: INFO: Pod "pod-projected-configmaps-4547d861-966a-469e-8591-866ed64e607b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021697681s
Jun  1 13:57:54.384: INFO: Pod "pod-projected-configmaps-4547d861-966a-469e-8591-866ed64e607b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02715656s
STEP: Saw pod success
Jun  1 13:57:54.384: INFO: Pod "pod-projected-configmaps-4547d861-966a-469e-8591-866ed64e607b" satisfied condition "Succeeded or Failed"
Jun  1 13:57:54.389: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-4547d861-966a-469e-8591-866ed64e607b container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 13:57:54.414: INFO: Waiting for pod pod-projected-configmaps-4547d861-966a-469e-8591-866ed64e607b to disappear
Jun  1 13:57:54.417: INFO: Pod pod-projected-configmaps-4547d861-966a-469e-8591-866ed64e607b no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 13:57:54.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8135" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":292,"completed":221,"skipped":3662,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 14 lines ...
Jun  1 13:57:59.489: INFO: Trying to dial the pod
Jun  1 13:58:04.504: INFO: Controller my-hostname-basic-d3b0d867-daef-46e8-aa2d-aeed442ae22b: Got expected result from replica 1 [my-hostname-basic-d3b0d867-daef-46e8-aa2d-aeed442ae22b-8wn4h]: "my-hostname-basic-d3b0d867-daef-46e8-aa2d-aeed442ae22b-8wn4h", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Jun  1 13:58:04.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3952" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":292,"completed":222,"skipped":3672,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl server-side dry-run 
  should check if kubectl can dry-run update Pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 16 lines ...
Jun  1 13:58:05.114: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-06-01T13:58:04Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"managedFields\": [\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:metadata\": {\n                        \"f:labels\": {\n                            \".\": {},\n                            \"f:run\": {}\n                        }\n                    },\n                    \"f:spec\": {\n                        \"f:containers\": {\n                            \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n                                \".\": {},\n                                \"f:image\": {},\n                                \"f:imagePullPolicy\": {},\n                                \"f:name\": {},\n                                \"f:resources\": {},\n                                \"f:terminationMessagePath\": {},\n                                \"f:terminationMessagePolicy\": {}\n                            }\n                        },\n                        \"f:dnsPolicy\": {},\n                        \"f:enableServiceLinks\": {},\n                        \"f:restartPolicy\": {},\n                        \"f:schedulerName\": {},\n                        \"f:securityContext\": {},\n                        \"f:terminationGracePeriodSeconds\": {}\n                    }\n                },\n                \"manager\": \"kubectl-run\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-06-01T13:58:04Z\"\n            },\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:status\": {\n                        \"f:conditions\": {\n                            \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:message\": {},\n                                \"f:reason\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:message\": {},\n                                \"f:reason\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            }\n                        },\n                        \"f:containerStatuses\": {},\n                        \"f:hostIP\": {},\n                        \"f:startTime\": {}\n                    }\n                },\n                \"manager\": \"kubelet\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-06-01T13:58:04Z\"\n            }\n        ],\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-9210\",\n        \"resourceVersion\": \"26153\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-9210/pods/e2e-test-httpd-pod\",\n        \"uid\": \"828cc5eb-e4e5-47cc-9750-8a54bca2cdaf\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-84gx9\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"kind-worker2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-84gx9\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-84gx9\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-06-01T13:58:04Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-06-01T13:58:04Z\",\n                \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n                \"reason\": \"ContainersNotReady\",\n                \"status\": \"False\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-06-01T13:58:04Z\",\n                \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n                \"reason\": \"ContainersNotReady\",\n                \"status\": \"False\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-06-01T13:58:04Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": false,\n                \"restartCount\": 0,\n                \"started\": false,\n                \"state\": {\n                    \"waiting\": {\n                        \"reason\": \"ContainerCreating\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.3\",\n        \"phase\": \"Pending\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-06-01T13:58:04Z\"\n    }\n}\n"
Jun  1 13:58:05.114: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:41191 --kubeconfig=/root/.kube/kind-test-config replace -f - --dry-run server --namespace=kubectl-9210'
Jun  1 13:58:05.876: INFO: stderr: "W0601 13:58:05.306101  143230 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n"
Jun  1 13:58:05.876: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n"
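[Editor's note] As the warning on stderr says, the boolean --dry-run spelling used above is deprecated; server-side dry run is requested explicitly with --dry-run=server, which validates and "replaces" the object on the apiserver without persisting anything. A sketch of the same invocation with the modern flag, piping a placeholder manifest on stdin (the manifest content is a stand-in, not the pod JSON from this test):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	podJSON := `{"apiVersion":"v1","kind":"Pod"}` // placeholder, not a complete manifest
	cmd := exec.Command("kubectl",
		"--kubeconfig", "/root/.kube/kind-test-config",
		"--namespace", "kubectl-9210",
		"replace", "-f", "-", "--dry-run=server") // server-side dry run, no deprecation warning
	cmd.Stdin = strings.NewReader(podJSON)
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out), err)
}
```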
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine
Jun  1 13:58:05.884: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:41191 --kubeconfig=/root/.kube/kind-test-config delete pods e2e-test-httpd-pod --namespace=kubectl-9210'
{"component":"entrypoint","file":"prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","time":"2020-06-01T14:30:09Z"}
{"component":"entrypoint","file":"prow/entrypoint/run.go:245","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","time":"2020-06-01T14:30:24Z"}