Result:       FAILURE
Tests:        0 failed / 0 succeeded
Started:      2020-05-23 02:33
Elapsed:      1h36m
Revision:
Builder:      b30e5a82-9c9d-11ea-b32e-1a4cbe0cfe79
resultstore:  https://source.cloud.google.com/results/invocations/47f8fd9a-273a-45bb-aec0-d45eb14a2dfe/targets/test
infra-commit: 5ebaf9c52
repo:         k8s.io/test-infra
repo-commit:  5ebaf9c524912eaadc64eef07986e66d7c1a5d22
repos:        k8s.io/kubernetes: master, k8s.io/test-infra: master

No Test Failures!


Error lines from build-log.txt

... skipping 102 lines ...
W0523 02:35:10.902] Analyzing: 4 targets (20 packages loaded, 27 targets configured)
W0523 02:35:12.394] Analyzing: 4 targets (433 packages loaded, 996 targets configured)
W0523 02:35:14.100] Analyzing: 4 targets (1659 packages loaded, 6765 targets configured)
W0523 02:35:16.907] Analyzing: 4 targets (2285 packages loaded, 15486 targets configured)
W0523 02:35:27.948] Analyzing: 4 targets (2285 packages loaded, 15486 targets configured)
W0523 02:35:32.874] Analyzing: 4 targets (2286 packages loaded, 15486 targets configured)
W0523 02:35:34.273] DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/48d5366022b4e3197674c8d6e2bee219/external/bazel_gazelle/internal/go_repository.bzl:184:13: org_golang_x_tools: gazelle: /bazel-scratch/.cache/bazel/_bazel_root/48d5366022b4e3197674c8d6e2bee219/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/48d5366022b4e3197674c8d6e2bee219/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go:2:43: expected 'package', found 'EOF'
W0523 02:35:34.281] gazelle: found packages pointer (pointer.go) and server (issue29198.go) in /bazel-scratch/.cache/bazel/_bazel_root/48d5366022b4e3197674c8d6e2bee219/external/org_golang_x_tools/go/internal/gccgoimporter/testdata
W0523 02:35:34.282] gazelle: found packages p (issue20046.go) and issue25301 (issue25301.go) in /bazel-scratch/.cache/bazel/_bazel_root/48d5366022b4e3197674c8d6e2bee219/external/org_golang_x_tools/go/internal/gcimporter/testdata
W0523 02:35:34.282] gazelle: /bazel-scratch/.cache/bazel/_bazel_root/48d5366022b4e3197674c8d6e2bee219/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/48d5366022b4e3197674c8d6e2bee219/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go:1:34: expected 'package', found 'EOF'
W0523 02:35:34.283] gazelle: /bazel-scratch/.cache/bazel/_bazel_root/48d5366022b4e3197674c8d6e2bee219/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/48d5366022b4e3197674c8d6e2bee219/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go:1:16: expected ';', found '.'
W0523 02:35:34.283] gazelle: /bazel-scratch/.cache/bazel/_bazel_root/48d5366022b4e3197674c8d6e2bee219/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/48d5366022b4e3197674c8d6e2bee219/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go:1:16: expected ';', found '.'
W0523 02:35:34.284] gazelle: /bazel-scratch/.cache/bazel/_bazel_root/48d5366022b4e3197674c8d6e2bee219/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/48d5366022b4e3197674c8d6e2bee219/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go:1:16: expected ';', found '.'
W0523 02:35:34.284] gazelle: /bazel-scratch/.cache/bazel/_bazel_root/48d5366022b4e3197674c8d6e2bee219/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/48d5366022b4e3197674c8d6e2bee219/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go:1:16: expected ';', found '.'
W0523 02:35:34.285] gazelle: /bazel-scratch/.cache/bazel/_bazel_root/48d5366022b4e3197674c8d6e2bee219/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/48d5366022b4e3197674c8d6e2bee219/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go:1:16: expected ';', found '.'
W0523 02:35:34.285] gazelle: /bazel-scratch/.cache/bazel/_bazel_root/48d5366022b4e3197674c8d6e2bee219/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/48d5366022b4e3197674c8d6e2bee219/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go:1:16: expected ';', found '.'
W0523 02:35:34.286] gazelle: /bazel-scratch/.cache/bazel/_bazel_root/48d5366022b4e3197674c8d6e2bee219/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/48d5366022b4e3197674c8d6e2bee219/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go:1:16: expected ';', found '.'
W0523 02:35:34.286] gazelle: finding module path for import domain.name/importdecl: exit status 1: can't load package: package domain.name/importdecl: cannot find module providing package domain.name/importdecl
W0523 02:35:34.286] gazelle: finding module path for import old.com/one: exit status 1: can't load package: package old.com/one: cannot find module providing package old.com/one
W0523 02:35:34.287] gazelle: finding module path for import titanic.biz/bar: exit status 1: can't load package: package titanic.biz/bar: cannot find module providing package titanic.biz/bar
W0523 02:35:34.287] gazelle: finding module path for import titanic.biz/foo: exit status 1: can't load package: package titanic.biz/foo: cannot find module providing package titanic.biz/foo
W0523 02:35:34.287] gazelle: finding module path for import fruit.io/pear: exit status 1: can't load package: package fruit.io/pear: cannot find module providing package fruit.io/pear
W0523 02:35:34.288] gazelle: finding module path for import fruit.io/banana: exit status 1: can't load package: package fruit.io/banana: cannot find module providing package fruit.io/banana
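
[editor's note] The "expected 'package', found 'EOF'" messages above are go/parser diagnostics: gazelle indexes everything under external/org_golang_x_tools, including testdata files that are deliberately truncated or malformed, so these warnings are expected noise rather than build failures. A minimal Go sketch (the file name is hypothetical) reproduces the same diagnostic:

    package main

    import (
        "fmt"
        "go/parser"
        "go/token"
    )

    func main() {
        // A source file that ends before its package clause, like the
        // truncated testdata files gazelle trips over above.
        src := "// a comment, then nothing\n"
        fset := token.NewFileSet()
        if _, err := parser.ParseFile(fset, "bad.go", src, 0); err != nil {
            fmt.Println(err) // e.g. bad.go:2:1: expected 'package', found 'EOF'
        }
    }
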
... skipping 153 lines ...
W0523 02:38:27.815] localAPIEndpoint:
W0523 02:38:27.815]   advertiseAddress: 172.17.0.4
W0523 02:38:27.815]   bindPort: 6443
W0523 02:38:27.815] nodeRegistration:
W0523 02:38:27.816]   criSocket: /run/containerd/containerd.sock
W0523 02:38:27.816]   kubeletExtraArgs:
W0523 02:38:27.816]     fail-swap-on: "false"
W0523 02:38:27.816]     node-ip: 172.17.0.4
W0523 02:38:27.816] ---
W0523 02:38:27.816] apiVersion: kubeadm.k8s.io/v1beta2
W0523 02:38:27.816] discovery:
W0523 02:38:27.816]   bootstrapToken:
W0523 02:38:27.817]     apiServerEndpoint: 172.17.0.3:6443
W0523 02:38:27.817]     token: abcdef.0123456789abcdef
W0523 02:38:27.817]     unsafeSkipCAVerification: true
W0523 02:38:27.817] kind: JoinConfiguration
W0523 02:38:27.817] nodeRegistration:
W0523 02:38:27.817]   criSocket: /run/containerd/containerd.sock
W0523 02:38:27.817]   kubeletExtraArgs:
W0523 02:38:27.817]     fail-swap-on: "false"
W0523 02:38:27.817]     node-ip: 172.17.0.4
W0523 02:38:27.818] ---
W0523 02:38:27.818] apiVersion: kubelet.config.k8s.io/v1beta1
W0523 02:38:27.818] evictionHard:
W0523 02:38:27.818]   imagefs.available: 0%
W0523 02:38:27.818]   nodefs.available: 0%
... skipping 29 lines ...
W0523 02:38:27.821] localAPIEndpoint:
W0523 02:38:27.821]   advertiseAddress: 172.17.0.2
W0523 02:38:27.821]   bindPort: 6443
W0523 02:38:27.822] nodeRegistration:
W0523 02:38:27.822]   criSocket: /run/containerd/containerd.sock
W0523 02:38:27.822]   kubeletExtraArgs:
W0523 02:38:27.822]     fail-swap-on: "false"
W0523 02:38:27.822]     node-ip: 172.17.0.2
W0523 02:38:27.822] ---
W0523 02:38:27.822] apiVersion: kubeadm.k8s.io/v1beta2
W0523 02:38:27.822] discovery:
W0523 02:38:27.822]   bootstrapToken:
W0523 02:38:27.823]     apiServerEndpoint: 172.17.0.3:6443
W0523 02:38:27.823]     token: abcdef.0123456789abcdef
W0523 02:38:27.823]     unsafeSkipCAVerification: true
W0523 02:38:27.823] kind: JoinConfiguration
W0523 02:38:27.823] nodeRegistration:
W0523 02:38:27.823]   criSocket: /run/containerd/containerd.sock
W0523 02:38:27.823]   kubeletExtraArgs:
W0523 02:38:27.823]     fail-swap-on: "false"
W0523 02:38:27.824]     node-ip: 172.17.0.2
W0523 02:38:27.824] ---
W0523 02:38:27.824] apiVersion: kubelet.config.k8s.io/v1beta1
W0523 02:38:27.824] evictionHard:
W0523 02:38:27.824]   imagefs.available: 0%
W0523 02:38:27.824]   nodefs.available: 0%
... skipping 29 lines ...
W0523 02:38:27.827] localAPIEndpoint:
W0523 02:38:27.828]   advertiseAddress: 172.17.0.3
W0523 02:38:27.828]   bindPort: 6443
W0523 02:38:27.828] nodeRegistration:
W0523 02:38:27.828]   criSocket: /run/containerd/containerd.sock
W0523 02:38:27.828]   kubeletExtraArgs:
W0523 02:38:27.828]     fail-swap-on: "false"
W0523 02:38:27.828]     node-ip: 172.17.0.3
W0523 02:38:27.828] ---
W0523 02:38:27.831] apiVersion: kubeadm.k8s.io/v1beta2
W0523 02:38:27.832] controlPlane:
W0523 02:38:27.832]   localAPIEndpoint:
W0523 02:38:27.832]     advertiseAddress: 172.17.0.3
... skipping 4 lines ...
W0523 02:38:27.832]     token: abcdef.0123456789abcdef
W0523 02:38:27.832]     unsafeSkipCAVerification: true
W0523 02:38:27.833] kind: JoinConfiguration
W0523 02:38:27.833] nodeRegistration:
W0523 02:38:27.833]   criSocket: /run/containerd/containerd.sock
W0523 02:38:27.833]   kubeletExtraArgs:
W0523 02:38:27.833]     fail-swap-on: "false"
W0523 02:38:27.833]     node-ip: 172.17.0.3
W0523 02:38:27.833] ---
W0523 02:38:27.833] apiVersion: kubelet.config.k8s.io/v1beta1
W0523 02:38:27.833] evictionHard:
W0523 02:38:27.833]   imagefs.available: 0%
W0523 02:38:27.833]   nodefs.available: 0%
... skipping 36 lines ...
W0523 02:38:58.270] I0523 02:38:29.274675     132 checks.go:376] validating the presence of executable ebtables
W0523 02:38:58.271] I0523 02:38:29.274726     132 checks.go:376] validating the presence of executable ethtool
W0523 02:38:58.271] I0523 02:38:29.274758     132 checks.go:376] validating the presence of executable socat
W0523 02:38:58.271] I0523 02:38:29.274801     132 checks.go:376] validating the presence of executable tc
W0523 02:38:58.271] I0523 02:38:29.274843     132 checks.go:376] validating the presence of executable touch
W0523 02:38:58.271] I0523 02:38:29.274888     132 checks.go:520] running all checks
W0523 02:38:58.271] [preflight] The system verification failed. Printing the output from the verification:
W0523 02:38:58.272] KERNEL_VERSION: 4.15.0-1044-gke
W0523 02:38:58.272] OS: Linux
W0523 02:38:58.272] CGROUPS_CPU: enabled
W0523 02:38:58.272] CGROUPS_CPUACCT: enabled
W0523 02:38:58.272] CGROUPS_CPUSET: enabled
W0523 02:38:58.272] CGROUPS_DEVICES: enabled
W0523 02:38:58.273] CGROUPS_FREEZER: enabled
W0523 02:38:58.273] CGROUPS_MEMORY: enabled
W0523 02:38:58.273] CGROUPS_HUGETLB: enabled
W0523 02:38:58.273] CGROUPS_PIDS: enabled
W0523 02:38:58.273] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
W0523 02:38:58.273] I0523 02:38:29.292290     132 checks.go:406] checking whether the given node name is reachable using net.LookupHost
W0523 02:38:58.274] I0523 02:38:29.294886     132 checks.go:618] validating kubelet version
W0523 02:38:58.274] I0523 02:38:29.396441     132 checks.go:128] validating if the "kubelet" service is enabled and active
W0523 02:38:58.274] I0523 02:38:29.410571     132 checks.go:201] validating availability of port 10250
W0523 02:38:58.274] I0523 02:38:29.410649     132 checks.go:201] validating availability of port 2379
W0523 02:38:58.274] I0523 02:38:29.410679     132 checks.go:201] validating availability of port 2380
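
[editor's note] The port checks above probe 10250 (kubelet) and 2379/2380 (etcd) before anything is listening on them. A minimal sketch under the assumption that availability is tested by binding the port; this is not kubeadm's actual checks.go code:

    package main

    import (
        "fmt"
        "net"
    )

    // checkPortFree reports whether the port can be bound; a successful
    // bind means the port is available, as in the preflight checks above.
    func checkPortFree(port int) error {
        ln, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
        if err != nil {
            return fmt.Errorf("port %d unavailable: %w", port, err)
        }
        return ln.Close()
    }

    func main() {
        for _, p := range []int{10250, 2379, 2380} {
            fmt.Println(p, checkPortFree(p))
        }
    }
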
... skipping 98 lines ...
W0523 02:38:58.291] I0523 02:38:47.526133     132 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=10s  in 0 milliseconds
W0523 02:38:58.291] I0523 02:38:48.026098     132 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=10s  in 0 milliseconds
W0523 02:38:58.292] I0523 02:38:48.526111     132 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=10s  in 0 milliseconds
W0523 02:38:58.292] I0523 02:38:49.026079     132 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=10s  in 0 milliseconds
W0523 02:38:58.292] I0523 02:38:49.526109     132 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=10s  in 0 milliseconds
W0523 02:38:58.292] I0523 02:38:50.026214     132 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=10s  in 0 milliseconds
W0523 02:38:58.292] I0523 02:38:55.036030     132 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=10s 500 Internal Server Error in 4510 milliseconds
W0523 02:38:58.293] I0523 02:38:55.528496     132 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=10s 500 Internal Server Error in 2 milliseconds
W0523 02:38:58.293] I0523 02:38:56.027351     132 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds
W0523 02:38:58.293] I0523 02:38:56.528137     132 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=10s 200 OK in 2 milliseconds
W0523 02:38:58.293] [apiclient] All control plane components are healthy after 22.507097 seconds
W0523 02:38:58.294] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
W0523 02:38:58.294] I0523 02:38:56.528461     132 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
W0523 02:38:58.294] I0523 02:38:56.537082     132 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 7 milliseconds
W0523 02:38:58.294] I0523 02:38:56.541015     132 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 3 milliseconds
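
[editor's note] The /healthz round trips above show the usual control-plane startup pattern: slow or 500 Internal Server Error responses while components come up, then 200 OK, after which kubeadm uploads its configuration. A hedged Go sketch of such a wait loop (not kubeadm's own implementation; the URL is taken from the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // pollHealthz retries the API server's /healthz endpoint until it
    // returns 200 OK or the deadline passes.
    func pollHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 10 * time.Second,
            // This sketch skips certificate verification; a real client
            // would trust the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("control plane not healthy after %s", timeout)
    }

    func main() {
        fmt.Println(pollHealthz("https://172.17.0.3:6443/healthz?timeout=10s", 4*time.Minute))
    }
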
... skipping 110 lines ...
W0523 02:39:26.567] I0523 02:39:01.330820     454 checks.go:376] validating the presence of executable ebtables
W0523 02:39:26.567] I0523 02:39:01.330865     454 checks.go:376] validating the presence of executable ethtool
W0523 02:39:26.567] I0523 02:39:01.330896     454 checks.go:376] validating the presence of executable socat
W0523 02:39:26.568] I0523 02:39:01.330926     454 checks.go:376] validating the presence of executable tc
W0523 02:39:26.568] I0523 02:39:01.330953     454 checks.go:376] validating the presence of executable touch
W0523 02:39:26.568] I0523 02:39:01.331011     454 checks.go:520] running all checks
W0523 02:39:26.568] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
W0523 02:39:26.569] I0523 02:39:01.336966     454 checks.go:406] checking whether the given node name is reachable using net.LookupHost
W0523 02:39:26.569] [preflight] The system verification failed. Printing the output from the verification:
W0523 02:39:26.569] KERNEL_VERSION: 4.15.0-1044-gke
W0523 02:39:26.569] OS: Linux
W0523 02:39:26.569] CGROUPS_CPU: enabled
W0523 02:39:26.569] CGROUPS_CPUACCT: enabled
W0523 02:39:26.569] CGROUPS_CPUSET: enabled
W0523 02:39:26.570] CGROUPS_DEVICES: enabled
... skipping 83 lines ...
W0523 02:39:26.587] I0523 02:39:01.336129     465 checks.go:376] validating the presence of executable ebtables
W0523 02:39:26.587] I0523 02:39:01.336172     465 checks.go:376] validating the presence of executable ethtool
W0523 02:39:26.587] I0523 02:39:01.336198     465 checks.go:376] validating the presence of executable socat
W0523 02:39:26.587] I0523 02:39:01.336243     465 checks.go:376] validating the presence of executable tc
W0523 02:39:26.588] I0523 02:39:01.336269     465 checks.go:376] validating the presence of executable touch
W0523 02:39:26.588] I0523 02:39:01.336305     465 checks.go:520] running all checks
W0523 02:39:26.588] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
W0523 02:39:26.588] I0523 02:39:01.341219     465 checks.go:406] checking whether the given node name is reachable using net.LookupHost
W0523 02:39:26.588] [preflight] The system verification failed. Printing the output from the verification:
W0523 02:39:26.589] KERNEL_VERSION: 4.15.0-1044-gke
W0523 02:39:26.589] OS: Linux
W0523 02:39:26.589] CGROUPS_CPU: enabled
W0523 02:39:26.589] CGROUPS_CPUACCT: enabled
W0523 02:39:26.589] CGROUPS_CPUSET: enabled
W0523 02:39:26.589] CGROUPS_DEVICES: enabled
... skipping 1257 lines ...
I0523 04:09:01.231] [04:09:01] Pod status is: Running
I0523 04:09:06.340] [04:09:06] Pod status is: Running
I0523 04:09:11.439] [04:09:11] Pod status is: Running
I0523 04:09:16.558] [04:09:16] Pod status is: Running
I0523 04:09:21.663] [04:09:21] Pod status is: Running
I0523 04:09:26.779] [04:09:26] Pod status is: Running
I0523 04:09:31.898] [04:09:31] Pod status is: Failed
I0523 04:09:31.898] [04:09:31] Failed.
I0523 04:09:32.005] Name:         e2e-conformance-test
I0523 04:09:32.005] Namespace:    conformance
I0523 04:09:32.005] Priority:     0
I0523 04:09:32.005] Node:         kind-worker2/172.17.0.2
I0523 04:09:32.005] Start Time:   Sat, 23 May 2020 02:44:44 +0000
I0523 04:09:32.006] Labels:       <none>
I0523 04:09:32.006] Annotations:  <none>
I0523 04:09:32.006] Status:       Failed
I0523 04:09:32.006] IP:           10.244.2.3
I0523 04:09:32.006] IPs:
I0523 04:09:32.006]   IP:  10.244.2.3
I0523 04:09:32.006] Containers:
I0523 04:09:32.006]   conformance-container:
I0523 04:09:32.007]     Container ID:   containerd://248290282939a2d6636c9f92c812578afd1e8cf363f3413adc1db9e81e0922c7
I0523 04:09:32.007]     Image:          k8s.gcr.io/conformance-amd64:v1.19.0-beta.0.135
I0523 04:09:32.007]     Image ID:       sha256:4cff70c92b2571a20d94f59100e60190cf4224ca7831cd1425612334c14181c2
I0523 04:09:32.007]     Port:           <none>
I0523 04:09:32.007]     Host Port:      <none>
I0523 04:09:32.007]     State:          Terminated
I0523 04:09:32.007]       Reason:       Error
I0523 04:09:32.007]       Exit Code:    1
I0523 04:09:32.008]       Started:      Sat, 23 May 2020 02:44:48 +0000
I0523 04:09:32.008]       Finished:     Sat, 23 May 2020 04:09:29 +0000
I0523 04:09:32.008]     Ready:          False
I0523 04:09:32.008]     Restart Count:  0
I0523 04:09:32.008]     Environment:
... skipping 27 lines ...
I0523 04:09:32.012] Events:          <none>
I0523 04:09:32.118] + /usr/local/bin/ginkgo '--focus=\[Conformance\]' --skip= --noColor=true /usr/local/bin/e2e.test -- --disable-log-dump --repo-root=/kubernetes --provider=skeleton --report-dir=/tmp/results --kubeconfig= -v=4
I0523 04:09:32.118] ++ tee /tmp/results/e2e.log
I0523 04:09:32.118] I0523 02:44:49.422585      17 test_context.go:414] Using a temporary kubeconfig file from in-cluster config : /tmp/kubeconfig-142995523
I0523 04:09:32.119] I0523 02:44:49.422613      17 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0523 04:09:32.119] I0523 02:44:49.422723      17 e2e.go:129] Starting e2e run "f5abc344-565c-4bc6-a6e6-d611de55dd5b" on Ginkgo node 1
I0523 04:09:32.119] {"msg":"Test Suite starting","total":292,"completed":0,"skipped":0,"failed":0}
I0523 04:09:32.119] Running Suite: Kubernetes e2e suite
I0523 04:09:32.119] ===================================
I0523 04:09:32.119] Random Seed: 1590201888 - Will randomize all specs
I0523 04:09:32.120] Will run 292 of 5098 specs
I0523 04:09:32.120] 
I0523 04:09:32.120] May 23 02:44:49.440: INFO: >>> kubeConfig: /tmp/kubeconfig-142995523
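
[editor's note] "Will run 292 of 5098 specs" is the effect of the ginkgo invocation shown earlier: --focus is a regular expression matched against full spec names, so the escaped brackets in --focus='\[Conformance\]' select only specs carrying the literal [Conformance] tag. A small illustration (the first spec name is taken from this log; the second is invented for contrast):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Ginkgo treats --focus as a regexp over the full spec name.
        focus := regexp.MustCompile(`\[Conformance\]`)
        specs := []string{
            "[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]",
            "[sig-storage] Volumes should allow expansion", // untagged, filtered out
        }
        for _, s := range specs {
            fmt.Println(focus.MatchString(s), s)
        }
    }
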
... skipping 45 lines ...
I0523 04:09:32.127] • [SLOW TEST:11.174 seconds]
I0523 04:09:32.127] [sig-api-machinery] ResourceQuota
I0523 04:09:32.127] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.127]   should create a ResourceQuota and capture the life of a replica set. [Conformance]
I0523 04:09:32.127]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.127] ------------------------------
I0523 04:09:32.128] {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":292,"completed":1,"skipped":13,"failed":0}
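
[editor's note] Every completed spec emits a JSON progress record like the one above, carrying running totals for the suite. A minimal sketch of a parser for these records (field names are taken from the records in this log; scanning for the first '{' to skip the timestamp prefix is an assumption about the wrapper format):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
        "strings"
    )

    // progress mirrors the JSON status lines interleaved with the
    // e2e output, e.g. {"msg":"PASSED ...","total":292,...}.
    type progress struct {
        Msg       string `json:"msg"`
        Total     int    `json:"total"`
        Completed int    `json:"completed"`
        Skipped   int    `json:"skipped"`
        Failed    int    `json:"failed"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            line := sc.Text()
            i := strings.Index(line, `{"msg"`)
            if i < 0 {
                continue // not a progress record
            }
            var p progress
            if err := json.Unmarshal([]byte(line[i:]), &p); err != nil {
                continue
            }
            fmt.Printf("%d/%d completed, %d failed: %s\n", p.Completed, p.Total, p.Failed, p.Msg)
        }
    }
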
I0523 04:09:32.128] [sig-network] Proxy version v1 
I0523 04:09:32.128]   should proxy through a service and a pod  [Conformance]
I0523 04:09:32.128]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.128] [BeforeEach] version v1
I0523 04:09:32.129]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0523 04:09:32.129] STEP: Creating a kubernetes client
... skipping 365 lines ...
I0523 04:09:32.215] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
I0523 04:09:32.215]   version v1
I0523 04:09:32.215]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
I0523 04:09:32.215]     should proxy through a service and a pod  [Conformance]
I0523 04:09:32.215]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.215] ------------------------------
I0523 04:09:32.216] {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":292,"completed":2,"skipped":13,"failed":0}
I0523 04:09:32.216] SSSSSSSSSSSSSSSSSS
I0523 04:09:32.216] ------------------------------
I0523 04:09:32.216] [sig-api-machinery] ResourceQuota 
I0523 04:09:32.216]   should create a ResourceQuota and capture the life of a replication controller. [Conformance]
I0523 04:09:32.216]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.217] [BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 26 lines ...
I0523 04:09:32.222] • [SLOW TEST:11.181 seconds]
I0523 04:09:32.222] [sig-api-machinery] ResourceQuota
I0523 04:09:32.223] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.223]   should create a ResourceQuota and capture the life of a replication controller. [Conformance]
I0523 04:09:32.223]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.223] ------------------------------
I0523 04:09:32.223] {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":292,"completed":3,"skipped":31,"failed":0}
I0523 04:09:32.223] SS
I0523 04:09:32.224] ------------------------------
I0523 04:09:32.224] [sig-network] Services 
I0523 04:09:32.224]   should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
I0523 04:09:32.224]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.224] [BeforeEach] [sig-network] Services
... skipping 88 lines ...
I0523 04:09:32.242] • [SLOW TEST:39.050 seconds]
I0523 04:09:32.242] [sig-network] Services
I0523 04:09:32.242] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
I0523 04:09:32.242]   should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
I0523 04:09:32.243]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.243] ------------------------------
I0523 04:09:32.243] {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":292,"completed":4,"skipped":33,"failed":0}
I0523 04:09:32.243] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.243] ------------------------------
I0523 04:09:32.243] [sig-storage] EmptyDir volumes 
I0523 04:09:32.243]   should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.244]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.244] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 9 lines ...
I0523 04:09:32.246] I0523 02:46:06.786411      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.246] I0523 02:46:06.786438      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.246] [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.246]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.246] I0523 02:46:06.788719      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.247] STEP: Creating a pod to test emptydir 0644 on tmpfs
I0523 04:09:32.247] May 23 02:46:06.797: INFO: Waiting up to 5m0s for pod "pod-5a8d5859-3736-4a78-b6c7-4379f241597b" in namespace "emptydir-7474" to be "Succeeded or Failed"
I0523 04:09:32.247] May 23 02:46:06.799: INFO: Pod "pod-5a8d5859-3736-4a78-b6c7-4379f241597b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165333ms
I0523 04:09:32.247] May 23 02:46:08.803: INFO: Pod "pod-5a8d5859-3736-4a78-b6c7-4379f241597b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005562392s
I0523 04:09:32.247] STEP: Saw pod success
I0523 04:09:32.248] May 23 02:46:08.803: INFO: Pod "pod-5a8d5859-3736-4a78-b6c7-4379f241597b" satisfied condition "Succeeded or Failed"
I0523 04:09:32.248] May 23 02:46:08.805: INFO: Trying to get logs from node kind-worker pod pod-5a8d5859-3736-4a78-b6c7-4379f241597b container test-container: <nil>
I0523 04:09:32.248] STEP: delete the pod
I0523 04:09:32.248] May 23 02:46:08.833: INFO: Waiting for pod pod-5a8d5859-3736-4a78-b6c7-4379f241597b to disappear
I0523 04:09:32.248] May 23 02:46:08.835: INFO: Pod pod-5a8d5859-3736-4a78-b6c7-4379f241597b no longer exists
I0523 04:09:32.248] [AfterEach] [sig-storage] EmptyDir volumes
I0523 04:09:32.249]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.249] May 23 02:46:08.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.249] STEP: Destroying namespace "emptydir-7474" for this suite.
I0523 04:09:32.249] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":5,"skipped":120,"failed":0}
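
[editor's note] The volume specs above all follow one shape: create a pod, wait up to 5m0s for its phase to reach Succeeded or Failed, fetch the container logs, then delete the pod. A hedged client-go sketch of the wait step (namespace, pod name, and kubeconfig path copied from the log; this is not the e2e framework's actual helper):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodDone polls a pod until its phase is Succeeded or Failed,
    // like the "Succeeded or Failed" waits in the log above.
    func waitForPodDone(cs kubernetes.Interface, ns, name string, timeout time.Duration) (corev1.PodPhase, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return "", err
            }
            if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
                return pod.Status.Phase, nil
            }
            time.Sleep(2 * time.Second)
        }
        return "", fmt.Errorf("timed out waiting for pod %s/%s", ns, name)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-142995523")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitForPodDone(cs, "emptydir-7474", "pod-5a8d5859-3736-4a78-b6c7-4379f241597b", 5*time.Minute))
    }
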
I0523 04:09:32.249] SSSSSSSSSS
I0523 04:09:32.249] ------------------------------
I0523 04:09:32.250] [sig-storage] Projected configMap 
I0523 04:09:32.250]   optional updates should be reflected in volume [NodeConformance] [Conformance]
I0523 04:09:32.250]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.250] [BeforeEach] [sig-storage] Projected configMap
... skipping 26 lines ...
I0523 04:09:32.255] • [SLOW TEST:88.466 seconds]
I0523 04:09:32.255] [sig-storage] Projected configMap
I0523 04:09:32.255] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
I0523 04:09:32.255]   optional updates should be reflected in volume [NodeConformance] [Conformance]
I0523 04:09:32.255]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.255] ------------------------------
I0523 04:09:32.255] {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":6,"skipped":130,"failed":0}
I0523 04:09:32.256] SSSSSSSSSS
I0523 04:09:32.256] ------------------------------
I0523 04:09:32.256] [sig-storage] Projected configMap 
I0523 04:09:32.256]   should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.256]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.256] [BeforeEach] [sig-storage] Projected configMap
... skipping 10 lines ...
I0523 04:09:32.258] I0523 02:47:37.430132      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.259] [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.259]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.259] I0523 02:47:37.432412      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.259] STEP: Creating configMap with name projected-configmap-test-volume-map-f1a1ae2a-d731-4e0a-8ab7-7d6d665c0766
I0523 04:09:32.260] STEP: Creating a pod to test consume configMaps
I0523 04:09:32.260] May 23 02:47:37.440: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-900a2f8b-c4c4-471d-b2a0-0f1352ac7cac" in namespace "projected-3952" to be "Succeeded or Failed"
I0523 04:09:32.260] May 23 02:47:37.442: INFO: Pod "pod-projected-configmaps-900a2f8b-c4c4-471d-b2a0-0f1352ac7cac": Phase="Pending", Reason="", readiness=false. Elapsed: 1.99957ms
I0523 04:09:32.260] May 23 02:47:39.445: INFO: Pod "pod-projected-configmaps-900a2f8b-c4c4-471d-b2a0-0f1352ac7cac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005043343s
I0523 04:09:32.260] STEP: Saw pod success
I0523 04:09:32.261] May 23 02:47:39.445: INFO: Pod "pod-projected-configmaps-900a2f8b-c4c4-471d-b2a0-0f1352ac7cac" satisfied condition "Succeeded or Failed"
I0523 04:09:32.261] May 23 02:47:39.447: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-900a2f8b-c4c4-471d-b2a0-0f1352ac7cac container projected-configmap-volume-test: <nil>
I0523 04:09:32.261] STEP: delete the pod
I0523 04:09:32.261] May 23 02:47:39.470: INFO: Waiting for pod pod-projected-configmaps-900a2f8b-c4c4-471d-b2a0-0f1352ac7cac to disappear
I0523 04:09:32.261] May 23 02:47:39.472: INFO: Pod pod-projected-configmaps-900a2f8b-c4c4-471d-b2a0-0f1352ac7cac no longer exists
I0523 04:09:32.262] [AfterEach] [sig-storage] Projected configMap
I0523 04:09:32.262]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.262] May 23 02:47:39.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.262] STEP: Destroying namespace "projected-3952" for this suite.
I0523 04:09:32.262] •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":7,"skipped":140,"failed":0}
I0523 04:09:32.263] SSSSSSSSSSSSSSSSSS
I0523 04:09:32.263] ------------------------------
I0523 04:09:32.263] [sig-api-machinery] Garbage collector 
I0523 04:09:32.263]   should delete pods created by rc when not orphaning [Conformance]
I0523 04:09:32.263]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.263] [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 47 lines ...
I0523 04:09:32.270] • [SLOW TEST:10.153 seconds]
I0523 04:09:32.270] [sig-api-machinery] Garbage collector
I0523 04:09:32.270] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.271]   should delete pods created by rc when not orphaning [Conformance]
I0523 04:09:32.271]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.271] ------------------------------
I0523 04:09:32.271] {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":292,"completed":8,"skipped":158,"failed":0}
I0523 04:09:32.271] SSSSSSSSSSS
I0523 04:09:32.271] ------------------------------
I0523 04:09:32.272] [sig-storage] EmptyDir volumes 
I0523 04:09:32.272]   should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.272]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.272] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 9 lines ...
I0523 04:09:32.274] I0523 02:47:49.756033      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.274] I0523 02:47:49.756056      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.274] [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.275]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.275] I0523 02:47:49.758242      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.275] STEP: Creating a pod to test emptydir 0777 on node default medium
I0523 04:09:32.275] May 23 02:47:49.763: INFO: Waiting up to 5m0s for pod "pod-1716c34c-542d-425e-aa28-2abfffe9e577" in namespace "emptydir-9772" to be "Succeeded or Failed"
I0523 04:09:32.275] May 23 02:47:49.765: INFO: Pod "pod-1716c34c-542d-425e-aa28-2abfffe9e577": Phase="Pending", Reason="", readiness=false. Elapsed: 1.905575ms
I0523 04:09:32.276] May 23 02:47:51.768: INFO: Pod "pod-1716c34c-542d-425e-aa28-2abfffe9e577": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005059945s
I0523 04:09:32.276] STEP: Saw pod success
I0523 04:09:32.276] May 23 02:47:51.768: INFO: Pod "pod-1716c34c-542d-425e-aa28-2abfffe9e577" satisfied condition "Succeeded or Failed"
I0523 04:09:32.276] May 23 02:47:51.770: INFO: Trying to get logs from node kind-worker pod pod-1716c34c-542d-425e-aa28-2abfffe9e577 container test-container: <nil>
I0523 04:09:32.276] STEP: delete the pod
I0523 04:09:32.276] May 23 02:47:51.782: INFO: Waiting for pod pod-1716c34c-542d-425e-aa28-2abfffe9e577 to disappear
I0523 04:09:32.277] May 23 02:47:51.784: INFO: Pod pod-1716c34c-542d-425e-aa28-2abfffe9e577 no longer exists
I0523 04:09:32.277] [AfterEach] [sig-storage] EmptyDir volumes
I0523 04:09:32.277]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.277] May 23 02:47:51.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.277] STEP: Destroying namespace "emptydir-9772" for this suite.
I0523 04:09:32.278] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":9,"skipped":169,"failed":0}
I0523 04:09:32.278] SSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.278] ------------------------------
I0523 04:09:32.278] [sig-storage] Secrets 
I0523 04:09:32.278]   should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.278]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.278] [BeforeEach] [sig-storage] Secrets
... skipping 10 lines ...
I0523 04:09:32.281] I0523 02:47:51.913949      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.281] [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.281]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.281] I0523 02:47:51.916236      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.282] STEP: Creating secret with name secret-test-38e6da4e-2aa9-41f3-b3f2-d33d093f86f5
I0523 04:09:32.282] STEP: Creating a pod to test consume secrets
I0523 04:09:32.282] May 23 02:47:51.923: INFO: Waiting up to 5m0s for pod "pod-secrets-4e3f1ba7-9bef-41d8-ac1d-5f1c999199c6" in namespace "secrets-5054" to be "Succeeded or Failed"
I0523 04:09:32.282] May 23 02:47:51.926: INFO: Pod "pod-secrets-4e3f1ba7-9bef-41d8-ac1d-5f1c999199c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.753502ms
I0523 04:09:32.282] May 23 02:47:53.929: INFO: Pod "pod-secrets-4e3f1ba7-9bef-41d8-ac1d-5f1c999199c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00552421s
I0523 04:09:32.283] STEP: Saw pod success
I0523 04:09:32.283] May 23 02:47:53.929: INFO: Pod "pod-secrets-4e3f1ba7-9bef-41d8-ac1d-5f1c999199c6" satisfied condition "Succeeded or Failed"
I0523 04:09:32.283] May 23 02:47:53.931: INFO: Trying to get logs from node kind-worker pod pod-secrets-4e3f1ba7-9bef-41d8-ac1d-5f1c999199c6 container secret-volume-test: <nil>
I0523 04:09:32.283] STEP: delete the pod
I0523 04:09:32.283] May 23 02:47:53.942: INFO: Waiting for pod pod-secrets-4e3f1ba7-9bef-41d8-ac1d-5f1c999199c6 to disappear
I0523 04:09:32.283] May 23 02:47:53.944: INFO: Pod pod-secrets-4e3f1ba7-9bef-41d8-ac1d-5f1c999199c6 no longer exists
I0523 04:09:32.284] [AfterEach] [sig-storage] Secrets
I0523 04:09:32.284]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.284] May 23 02:47:53.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.284] STEP: Destroying namespace "secrets-5054" for this suite.
I0523 04:09:32.284] •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":10,"skipped":195,"failed":0}
I0523 04:09:32.284] SSSSSSSSSSSSSSSSSSS
I0523 04:09:32.285] ------------------------------
I0523 04:09:32.285] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
I0523 04:09:32.285]   creating/deleting custom resource definition objects works  [Conformance]
I0523 04:09:32.285]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.285] [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 13 lines ...
I0523 04:09:32.288]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.288] May 23 02:47:54.074: INFO: >>> kubeConfig: /tmp/kubeconfig-142995523
I0523 04:09:32.289] [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
I0523 04:09:32.289]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.289] May 23 02:47:55.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.289] STEP: Destroying namespace "custom-resource-definition-1034" for this suite.
I0523 04:09:32.290] •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":292,"completed":11,"skipped":214,"failed":0}
I0523 04:09:32.290] SSSSSSSSS
I0523 04:09:32.290] ------------------------------
I0523 04:09:32.290] [sig-storage] ConfigMap 
I0523 04:09:32.290]   should be consumable from pods in volume [NodeConformance] [Conformance]
I0523 04:09:32.290]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.290] [BeforeEach] [sig-storage] ConfigMap
... skipping 10 lines ...
I0523 04:09:32.293] I0523 02:47:55.226539      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.293] I0523 02:47:55.229475      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.293] [It] should be consumable from pods in volume [NodeConformance] [Conformance]
I0523 04:09:32.293]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.293] STEP: Creating configMap with name configmap-test-volume-9a4c319e-020b-43ed-9f40-36052b1042c6
I0523 04:09:32.294] STEP: Creating a pod to test consume configMaps
I0523 04:09:32.294] May 23 02:47:55.239: INFO: Waiting up to 5m0s for pod "pod-configmaps-48f96f92-fe4b-4921-af53-0aec0e0de2fb" in namespace "configmap-2948" to be "Succeeded or Failed"
I0523 04:09:32.294] May 23 02:47:55.241: INFO: Pod "pod-configmaps-48f96f92-fe4b-4921-af53-0aec0e0de2fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058935ms
I0523 04:09:32.294] May 23 02:47:57.244: INFO: Pod "pod-configmaps-48f96f92-fe4b-4921-af53-0aec0e0de2fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005009679s
I0523 04:09:32.295] May 23 02:47:59.247: INFO: Pod "pod-configmaps-48f96f92-fe4b-4921-af53-0aec0e0de2fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008115492s
I0523 04:09:32.295] STEP: Saw pod success
I0523 04:09:32.295] May 23 02:47:59.247: INFO: Pod "pod-configmaps-48f96f92-fe4b-4921-af53-0aec0e0de2fb" satisfied condition "Succeeded or Failed"
I0523 04:09:32.295] May 23 02:47:59.249: INFO: Trying to get logs from node kind-worker pod pod-configmaps-48f96f92-fe4b-4921-af53-0aec0e0de2fb container configmap-volume-test: <nil>
I0523 04:09:32.295] STEP: delete the pod
I0523 04:09:32.295] May 23 02:47:59.261: INFO: Waiting for pod pod-configmaps-48f96f92-fe4b-4921-af53-0aec0e0de2fb to disappear
I0523 04:09:32.296] May 23 02:47:59.263: INFO: Pod pod-configmaps-48f96f92-fe4b-4921-af53-0aec0e0de2fb no longer exists
I0523 04:09:32.296] [AfterEach] [sig-storage] ConfigMap
I0523 04:09:32.296]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.296] May 23 02:47:59.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.296] STEP: Destroying namespace "configmap-2948" for this suite.
I0523 04:09:32.297] •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":12,"skipped":223,"failed":0}
I0523 04:09:32.297] SSSSSSS
I0523 04:09:32.297] ------------------------------
I0523 04:09:32.297] [sig-storage] ConfigMap 
I0523 04:09:32.297]   optional updates should be reflected in volume [NodeConformance] [Conformance]
I0523 04:09:32.297]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.297] [BeforeEach] [sig-storage] ConfigMap
... skipping 19 lines ...
I0523 04:09:32.301] STEP: Creating configMap with name cm-test-opt-create-6e1ad126-411e-4958-a519-f23ce111b0b2
I0523 04:09:32.301] STEP: waiting to observe update in volume
I0523 04:09:32.301] [AfterEach] [sig-storage] ConfigMap
I0523 04:09:32.302]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.302] May 23 02:48:03.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.302] STEP: Destroying namespace "configmap-27" for this suite.
I0523 04:09:32.302] •{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":13,"skipped":230,"failed":0}
I0523 04:09:32.302] SSSSSSSSSSSSS
I0523 04:09:32.302] ------------------------------
I0523 04:09:32.302] [k8s.io] KubeletManagedEtcHosts 
I0523 04:09:32.303]   should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.303]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.303] [BeforeEach] [k8s.io] KubeletManagedEtcHosts
... skipping 56 lines ...
I0523 04:09:32.313] • [SLOW TEST:7.095 seconds]
I0523 04:09:32.314] [k8s.io] KubeletManagedEtcHosts
I0523 04:09:32.314] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:32.314]   should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.314]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.314] ------------------------------
I0523 04:09:32.314] {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":14,"skipped":243,"failed":0}
I0523 04:09:32.315] SSSS
I0523 04:09:32.315] ------------------------------
I0523 04:09:32.315] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
I0523 04:09:32.315]   should be able to convert from CR v1 to CR v2 [Conformance]
I0523 04:09:32.315]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.315] [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 35 lines ...
I0523 04:09:32.322] • [SLOW TEST:6.887 seconds]
I0523 04:09:32.322] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
I0523 04:09:32.322] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.323]   should be able to convert from CR v1 to CR v2 [Conformance]
I0523 04:09:32.323]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.323] ------------------------------
I0523 04:09:32.323] {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":292,"completed":15,"skipped":247,"failed":0}
I0523 04:09:32.323] SSSSSS
I0523 04:09:32.323] ------------------------------
I0523 04:09:32.323] [sig-apps] Job 
I0523 04:09:32.324]   should delete a job [Conformance]
I0523 04:09:32.324]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.324] [BeforeEach] [sig-apps] Job
... skipping 31 lines ...
I0523 04:09:32.330] • [SLOW TEST:48.909 seconds]
I0523 04:09:32.330] [sig-apps] Job
I0523 04:09:32.330] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0523 04:09:32.330]   should delete a job [Conformance]
I0523 04:09:32.330]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.331] ------------------------------
I0523 04:09:32.331] {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":292,"completed":16,"skipped":253,"failed":0}
I0523 04:09:32.331] SSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.331] ------------------------------
I0523 04:09:32.331] [sig-api-machinery] Watchers 
I0523 04:09:32.332]   should observe add, update, and delete watch notifications on configmaps [Conformance]
I0523 04:09:32.332]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.332] [BeforeEach] [sig-api-machinery] Watchers
... skipping 40 lines ...
I0523 04:09:32.344] • [SLOW TEST:60.179 seconds]
I0523 04:09:32.344] [sig-api-machinery] Watchers
I0523 04:09:32.344] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.344]   should observe add, update, and delete watch notifications on configmaps [Conformance]
I0523 04:09:32.345]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.345] ------------------------------
I0523 04:09:32.345] {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":292,"completed":17,"skipped":275,"failed":0}
I0523 04:09:32.345] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.345] ------------------------------
I0523 04:09:32.345] [k8s.io] Lease 
I0523 04:09:32.345]   lease API should be available [Conformance]
I0523 04:09:32.346]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.346] [BeforeEach] [k8s.io] Lease
... skipping 12 lines ...
I0523 04:09:32.348]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.348] I0523 02:50:06.658211      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.348] [AfterEach] [k8s.io] Lease
I0523 04:09:32.349]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.349] May 23 02:50:06.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.349] STEP: Destroying namespace "lease-test-7406" for this suite.
I0523 04:09:32.349] •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":292,"completed":18,"skipped":331,"failed":0}
I0523 04:09:32.349] SSSSS
I0523 04:09:32.349] ------------------------------
I0523 04:09:32.349] [sig-storage] EmptyDir volumes 
I0523 04:09:32.350]   volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.350]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.350] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 9 lines ...
I0523 04:09:32.352] I0523 02:50:06.821535      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.352] I0523 02:50:06.821696      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.352] [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.352]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.352] I0523 02:50:06.824248      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.353] STEP: Creating a pod to test emptydir volume type on tmpfs
I0523 04:09:32.353] May 23 02:50:06.829: INFO: Waiting up to 5m0s for pod "pod-28dbf444-c983-4f24-8dda-478d34a581b0" in namespace "emptydir-4957" to be "Succeeded or Failed"
I0523 04:09:32.353] May 23 02:50:06.832: INFO: Pod "pod-28dbf444-c983-4f24-8dda-478d34a581b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.643209ms
I0523 04:09:32.353] May 23 02:50:08.835: INFO: Pod "pod-28dbf444-c983-4f24-8dda-478d34a581b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005510232s
I0523 04:09:32.353] STEP: Saw pod success
I0523 04:09:32.353] May 23 02:50:08.835: INFO: Pod "pod-28dbf444-c983-4f24-8dda-478d34a581b0" satisfied condition "Succeeded or Failed"
I0523 04:09:32.354] May 23 02:50:08.837: INFO: Trying to get logs from node kind-worker pod pod-28dbf444-c983-4f24-8dda-478d34a581b0 container test-container: <nil>
I0523 04:09:32.354] STEP: delete the pod
I0523 04:09:32.354] May 23 02:50:08.855: INFO: Waiting for pod pod-28dbf444-c983-4f24-8dda-478d34a581b0 to disappear
I0523 04:09:32.354] May 23 02:50:08.857: INFO: Pod pod-28dbf444-c983-4f24-8dda-478d34a581b0 no longer exists
I0523 04:09:32.354] [AfterEach] [sig-storage] EmptyDir volumes
I0523 04:09:32.354]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.354] May 23 02:50:08.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.354] STEP: Destroying namespace "emptydir-4957" for this suite.
I0523 04:09:32.355] •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":19,"skipped":336,"failed":0}
I0523 04:09:32.355] SSSSSSSSSSSSSSSSSSS
I0523 04:09:32.355] ------------------------------
I0523 04:09:32.355] [k8s.io] [sig-node] PreStop 
I0523 04:09:32.355]   should call prestop when killing a pod  [Conformance]
I0523 04:09:32.355]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.355] [BeforeEach] [k8s.io] [sig-node] PreStop
... skipping 39 lines ...
I0523 04:09:32.361] • [SLOW TEST:11.165 seconds]
I0523 04:09:32.361] [k8s.io] [sig-node] PreStop
I0523 04:09:32.361] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:32.362]   should call prestop when killing a pod  [Conformance]
I0523 04:09:32.362]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.362] ------------------------------
I0523 04:09:32.362] {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":292,"completed":20,"skipped":355,"failed":0}
I0523 04:09:32.362] SSSSSSSSSSSSS
I0523 04:09:32.362] ------------------------------
I0523 04:09:32.362] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] 
I0523 04:09:32.362]   removing taint cancels eviction [Disruptive] [Conformance]
I0523 04:09:32.363]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.363] [BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
... skipping 36 lines ...
I0523 04:09:32.369] • [SLOW TEST:135.398 seconds]
I0523 04:09:32.369] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
I0523 04:09:32.369] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:32.369]   removing taint cancels eviction [Disruptive] [Conformance]
I0523 04:09:32.369]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.370] ------------------------------
I0523 04:09:32.370] {"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":292,"completed":21,"skipped":368,"failed":0}
I0523 04:09:32.370] S
I0523 04:09:32.370] ------------------------------
I0523 04:09:32.370] [k8s.io] Pods 
I0523 04:09:32.370]   should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
I0523 04:09:32.370]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.370] [BeforeEach] [k8s.io] Pods
... skipping 17 lines ...
I0523 04:09:32.373] STEP: submitting the pod to kubernetes
I0523 04:09:32.373] STEP: verifying the pod is in kubernetes
I0523 04:09:32.373] STEP: updating the pod
I0523 04:09:32.374] May 23 02:52:40.078: INFO: Successfully updated pod "pod-update-activedeadlineseconds-713d1914-9f6a-4bde-b01e-811a3bf344af"
I0523 04:09:32.374] May 23 02:52:40.078: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-713d1914-9f6a-4bde-b01e-811a3bf344af" in namespace "pods-7820" to be "terminated due to deadline exceeded"
I0523 04:09:32.374] May 23 02:52:40.083: INFO: Pod "pod-update-activedeadlineseconds-713d1914-9f6a-4bde-b01e-811a3bf344af": Phase="Running", Reason="", readiness=true. Elapsed: 4.545165ms
I0523 04:09:32.374] May 23 02:52:42.086: INFO: Pod "pod-update-activedeadlineseconds-713d1914-9f6a-4bde-b01e-811a3bf344af": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.007457515s
I0523 04:09:32.374] May 23 02:52:42.086: INFO: Pod "pod-update-activedeadlineseconds-713d1914-9f6a-4bde-b01e-811a3bf344af" satisfied condition "terminated due to deadline exceeded"
I0523 04:09:32.374] [AfterEach] [k8s.io] Pods
I0523 04:09:32.374]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.375] May 23 02:52:42.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.375] STEP: Destroying namespace "pods-7820" for this suite.
I0523 04:09:32.375] 
I0523 04:09:32.375] • [SLOW TEST:6.666 seconds]
I0523 04:09:32.375] [k8s.io] Pods
I0523 04:09:32.375] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:32.375]   should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
I0523 04:09:32.375]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.375] ------------------------------
I0523 04:09:32.375] {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":292,"completed":22,"skipped":369,"failed":0}
I0523 04:09:32.375] SSSSSSS
I0523 04:09:32.375] ------------------------------
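The activeDeadlineSeconds case updates a running pod so that its deadline expires, after which the kubelet marks the pod Failed with reason DeadlineExceeded, exactly the phase transition logged above. The field can also be set at creation time; a sketch (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: deadline-demo
spec:
  activeDeadlineSeconds: 5        # measured from the pod's startTime
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 600"]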
I0523 04:09:32.376] [sig-storage] Projected downwardAPI 
I0523 04:09:32.376]   should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.376]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.376] [BeforeEach] [sig-storage] Projected downwardAPI
... skipping 11 lines ...
I0523 04:09:32.377] [BeforeEach] [sig-storage] Projected downwardAPI
I0523 04:09:32.378]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
I0523 04:09:32.378] [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.378]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.378] I0523 02:52:42.217511      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.378] STEP: Creating a pod to test downward API volume plugin
I0523 04:09:32.378] May 23 02:52:42.222: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4cb23a52-3df9-4600-9cd2-9ee48aed2e81" in namespace "projected-4444" to be "Succeeded or Failed"
I0523 04:09:32.379] May 23 02:52:42.224: INFO: Pod "downwardapi-volume-4cb23a52-3df9-4600-9cd2-9ee48aed2e81": Phase="Pending", Reason="", readiness=false. Elapsed: 1.956935ms
I0523 04:09:32.379] May 23 02:52:44.227: INFO: Pod "downwardapi-volume-4cb23a52-3df9-4600-9cd2-9ee48aed2e81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005149524s
I0523 04:09:32.379] STEP: Saw pod success
I0523 04:09:32.379] May 23 02:52:44.227: INFO: Pod "downwardapi-volume-4cb23a52-3df9-4600-9cd2-9ee48aed2e81" satisfied condition "Succeeded or Failed"
I0523 04:09:32.379] May 23 02:52:44.230: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-4cb23a52-3df9-4600-9cd2-9ee48aed2e81 container client-container: <nil>
I0523 04:09:32.380] STEP: delete the pod
I0523 04:09:32.380] May 23 02:52:44.254: INFO: Waiting for pod downwardapi-volume-4cb23a52-3df9-4600-9cd2-9ee48aed2e81 to disappear
I0523 04:09:32.380] May 23 02:52:44.256: INFO: Pod downwardapi-volume-4cb23a52-3df9-4600-9cd2-9ee48aed2e81 no longer exists
I0523 04:09:32.380] [AfterEach] [sig-storage] Projected downwardAPI
I0523 04:09:32.380]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.380] May 23 02:52:44.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.380] STEP: Destroying namespace "projected-4444" for this suite.
I0523 04:09:32.380] •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":23,"skipped":376,"failed":0}
I0523 04:09:32.381] 
I0523 04:09:32.381] ------------------------------
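The "set mode on item file" case projects a downward API item into a volume with an explicit per-file mode, then reads the permissions back from the client container. A sketch of the corresponding spec (paths and names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: projected-mode-demo
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400            # per-item mode, overrides any defaultMode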
I0523 04:09:32.381] [sig-storage] Projected configMap 
I0523 04:09:32.381]   should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
I0523 04:09:32.381]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.381] [BeforeEach] [sig-storage] Projected configMap
... skipping 10 lines ...
I0523 04:09:32.383] I0523 02:52:44.383670      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.383] [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
I0523 04:09:32.383]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.384] I0523 02:52:44.385894      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.384] STEP: Creating configMap with name projected-configmap-test-volume-map-2e67cd7a-0dea-4153-9dc8-a2f768343598
I0523 04:09:32.384] STEP: Creating a pod to test consume configMaps
I0523 04:09:32.384] May 23 02:52:44.393: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-36a0ac80-18b8-4af4-a0bc-a7817f0dc30c" in namespace "projected-3219" to be "Succeeded or Failed"
I0523 04:09:32.384] May 23 02:52:44.395: INFO: Pod "pod-projected-configmaps-36a0ac80-18b8-4af4-a0bc-a7817f0dc30c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.758616ms
I0523 04:09:32.385] May 23 02:52:46.403: INFO: Pod "pod-projected-configmaps-36a0ac80-18b8-4af4-a0bc-a7817f0dc30c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009234532s
I0523 04:09:32.385] STEP: Saw pod success
I0523 04:09:32.385] May 23 02:52:46.403: INFO: Pod "pod-projected-configmaps-36a0ac80-18b8-4af4-a0bc-a7817f0dc30c" satisfied condition "Succeeded or Failed"
I0523 04:09:32.385] May 23 02:52:46.405: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-36a0ac80-18b8-4af4-a0bc-a7817f0dc30c container projected-configmap-volume-test: <nil>
I0523 04:09:32.385] STEP: delete the pod
I0523 04:09:32.385] May 23 02:52:46.418: INFO: Waiting for pod pod-projected-configmaps-36a0ac80-18b8-4af4-a0bc-a7817f0dc30c to disappear
I0523 04:09:32.386] May 23 02:52:46.420: INFO: Pod pod-projected-configmaps-36a0ac80-18b8-4af4-a0bc-a7817f0dc30c no longer exists
I0523 04:09:32.386] [AfterEach] [sig-storage] Projected configMap
I0523 04:09:32.386]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.386] May 23 02:52:46.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.386] STEP: Destroying namespace "projected-3219" for this suite.
I0523 04:09:32.386] •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":292,"completed":24,"skipped":376,"failed":0}
I0523 04:09:32.386] SSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.387] ------------------------------
I0523 04:09:32.387] [sig-cli] Kubectl client Kubectl replace 
I0523 04:09:32.387]   should update a single-container pod's image  [Conformance]
I0523 04:09:32.387]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.387] [BeforeEach] [sig-cli] Kubectl client
... skipping 47 lines ...
I0523 04:09:32.400] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
I0523 04:09:32.400]   Kubectl replace
I0523 04:09:32.400]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1560
I0523 04:09:32.401]     should update a single-container pod's image  [Conformance]
I0523 04:09:32.401]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.401] ------------------------------
I0523 04:09:32.401] {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":292,"completed":25,"skipped":397,"failed":0}
I0523 04:09:32.401] SS
I0523 04:09:32.401] ------------------------------
I0523 04:09:32.401] [k8s.io] Probing container 
I0523 04:09:32.401]   should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
I0523 04:09:32.402]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.402] [BeforeEach] [k8s.io] Probing container
... skipping 27 lines ...
I0523 04:09:32.406] • [SLOW TEST:24.185 seconds]
I0523 04:09:32.406] [k8s.io] Probing container
I0523 04:09:32.407] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:32.407]   should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
I0523 04:09:32.407]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.407] ------------------------------
I0523 04:09:32.407] {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":292,"completed":26,"skipped":399,"failed":0}
I0523 04:09:32.407] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.407] ------------------------------
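The probe case runs a server whose /healthz endpoint eventually starts failing, and the kubelet restarts the container once the liveness check fails. The probe wiring, as a sketch (the image is the well-known docs sample; thresholds are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness    # serves /healthz, then starts failing it
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15     # give the server time to start
      periodSeconds: 5
      failureThreshold: 1         # restart on the first failed check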
I0523 04:09:32.407] [sig-node] Downward API 
I0523 04:09:32.407]   should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
I0523 04:09:32.408]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.408] [BeforeEach] [sig-node] Downward API
... skipping 9 lines ...
I0523 04:09:32.409] I0523 02:53:23.175097      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.410] I0523 02:53:23.175126      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.410] [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
I0523 04:09:32.410]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.410] I0523 02:53:23.177384      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.410] STEP: Creating a pod to test downward api env vars
I0523 04:09:32.410] May 23 02:53:23.182: INFO: Waiting up to 5m0s for pod "downward-api-a28e5cef-4533-408c-9554-8d8d0f344689" in namespace "downward-api-2393" to be "Succeeded or Failed"
I0523 04:09:32.411] May 23 02:53:23.185: INFO: Pod "downward-api-a28e5cef-4533-408c-9554-8d8d0f344689": Phase="Pending", Reason="", readiness=false. Elapsed: 2.79427ms
I0523 04:09:32.411] May 23 02:53:25.187: INFO: Pod "downward-api-a28e5cef-4533-408c-9554-8d8d0f344689": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005745812s
I0523 04:09:32.411] STEP: Saw pod success
I0523 04:09:32.411] May 23 02:53:25.188: INFO: Pod "downward-api-a28e5cef-4533-408c-9554-8d8d0f344689" satisfied condition "Succeeded or Failed"
I0523 04:09:32.411] May 23 02:53:25.190: INFO: Trying to get logs from node kind-worker pod downward-api-a28e5cef-4533-408c-9554-8d8d0f344689 container dapi-container: <nil>
I0523 04:09:32.411] STEP: delete the pod
I0523 04:09:32.412] May 23 02:53:25.215: INFO: Waiting for pod downward-api-a28e5cef-4533-408c-9554-8d8d0f344689 to disappear
I0523 04:09:32.412] May 23 02:53:25.222: INFO: Pod downward-api-a28e5cef-4533-408c-9554-8d8d0f344689 no longer exists
I0523 04:09:32.412] [AfterEach] [sig-node] Downward API
I0523 04:09:32.412]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.412] May 23 02:53:25.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.412] STEP: Destroying namespace "downward-api-2393" for this suite.
I0523 04:09:32.412] •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":292,"completed":27,"skipped":478,"failed":0}
I0523 04:09:32.413] SSSSSSSSSSSSSS
I0523 04:09:32.413] ------------------------------
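The downward API env-var case injects pod metadata through valueFrom.fieldRef and asserts the values inside the container. The same wiring in a standalone pod (sketch; names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP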
I0523 04:09:32.413] [sig-storage] Projected downwardAPI 
I0523 04:09:32.413]   should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
I0523 04:09:32.413]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.413] [BeforeEach] [sig-storage] Projected downwardAPI
... skipping 11 lines ...
I0523 04:09:32.415] [BeforeEach] [sig-storage] Projected downwardAPI
I0523 04:09:32.415]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
I0523 04:09:32.415] [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
I0523 04:09:32.415]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.415] I0523 02:53:25.356754      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.415] STEP: Creating a pod to test downward API volume plugin
I0523 04:09:32.416] May 23 02:53:25.362: INFO: Waiting up to 5m0s for pod "downwardapi-volume-706ccacc-f063-4709-b4a6-643036fe644c" in namespace "projected-385" to be "Succeeded or Failed"
I0523 04:09:32.416] May 23 02:53:25.363: INFO: Pod "downwardapi-volume-706ccacc-f063-4709-b4a6-643036fe644c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.752605ms
I0523 04:09:32.416] May 23 02:53:27.368: INFO: Pod "downwardapi-volume-706ccacc-f063-4709-b4a6-643036fe644c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00642783s
I0523 04:09:32.416] STEP: Saw pod success
I0523 04:09:32.416] May 23 02:53:27.368: INFO: Pod "downwardapi-volume-706ccacc-f063-4709-b4a6-643036fe644c" satisfied condition "Succeeded or Failed"
I0523 04:09:32.416] May 23 02:53:27.371: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-706ccacc-f063-4709-b4a6-643036fe644c container client-container: <nil>
I0523 04:09:32.417] STEP: delete the pod
I0523 04:09:32.417] May 23 02:53:27.384: INFO: Waiting for pod downwardapi-volume-706ccacc-f063-4709-b4a6-643036fe644c to disappear
I0523 04:09:32.417] May 23 02:53:27.387: INFO: Pod downwardapi-volume-706ccacc-f063-4709-b4a6-643036fe644c no longer exists
I0523 04:09:32.417] [AfterEach] [sig-storage] Projected downwardAPI
I0523 04:09:32.417]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.417] May 23 02:53:27.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.418] STEP: Destroying namespace "projected-385" for this suite.
I0523 04:09:32.418] •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":28,"skipped":492,"failed":0}
I0523 04:09:32.418] SSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.418] ------------------------------
I0523 04:09:32.418] [sig-storage] EmptyDir wrapper volumes 
I0523 04:09:32.418]   should not cause race condition when used for configmaps [Serial] [Conformance]
I0523 04:09:32.418]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.419] [BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 66 lines ...
I0523 04:09:32.433] • [SLOW TEST:69.594 seconds]
I0523 04:09:32.433] [sig-storage] EmptyDir wrapper volumes
I0523 04:09:32.433] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
I0523 04:09:32.433]   should not cause race condition when used for configmaps [Serial] [Conformance]
I0523 04:09:32.433]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.433] ------------------------------
I0523 04:09:32.434] {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":292,"completed":29,"skipped":518,"failed":0}
I0523 04:09:32.434] SSSSSSSSS
I0523 04:09:32.434] ------------------------------
I0523 04:09:32.434] [sig-storage] Subpath Atomic writer volumes 
I0523 04:09:32.434]   should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
I0523 04:09:32.434]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.434] [BeforeEach] [sig-storage] Subpath
... skipping 13 lines ...
I0523 04:09:32.437] STEP: Setting up data
I0523 04:09:32.437] I0523 02:54:37.120020      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.437] [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
I0523 04:09:32.437]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.437] STEP: Creating pod pod-subpath-test-configmap-wz9l
I0523 04:09:32.437] STEP: Creating a pod to test atomic-volume-subpath
I0523 04:09:32.438] May 23 02:54:37.129: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wz9l" in namespace "subpath-7341" to be "Succeeded or Failed"
I0523 04:09:32.438] May 23 02:54:37.132: INFO: Pod "pod-subpath-test-configmap-wz9l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111162ms
I0523 04:09:32.438] May 23 02:54:39.134: INFO: Pod "pod-subpath-test-configmap-wz9l": Phase="Running", Reason="", readiness=true. Elapsed: 2.004788129s
I0523 04:09:32.438] May 23 02:54:41.137: INFO: Pod "pod-subpath-test-configmap-wz9l": Phase="Running", Reason="", readiness=true. Elapsed: 4.007415144s
I0523 04:09:32.438] May 23 02:54:43.140: INFO: Pod "pod-subpath-test-configmap-wz9l": Phase="Running", Reason="", readiness=true. Elapsed: 6.010159896s
I0523 04:09:32.439] May 23 02:54:45.143: INFO: Pod "pod-subpath-test-configmap-wz9l": Phase="Running", Reason="", readiness=true. Elapsed: 8.013112376s
I0523 04:09:32.439] May 23 02:54:47.145: INFO: Pod "pod-subpath-test-configmap-wz9l": Phase="Running", Reason="", readiness=true. Elapsed: 10.015484889s
... skipping 2 lines ...
I0523 04:09:32.439] May 23 02:54:53.153: INFO: Pod "pod-subpath-test-configmap-wz9l": Phase="Running", Reason="", readiness=true. Elapsed: 16.024059531s
I0523 04:09:32.440] May 23 02:54:55.156: INFO: Pod "pod-subpath-test-configmap-wz9l": Phase="Running", Reason="", readiness=true. Elapsed: 18.026799676s
I0523 04:09:32.440] May 23 02:54:57.159: INFO: Pod "pod-subpath-test-configmap-wz9l": Phase="Running", Reason="", readiness=true. Elapsed: 20.02957513s
I0523 04:09:32.440] May 23 02:54:59.162: INFO: Pod "pod-subpath-test-configmap-wz9l": Phase="Running", Reason="", readiness=true. Elapsed: 22.032658517s
I0523 04:09:32.440] May 23 02:55:01.165: INFO: Pod "pod-subpath-test-configmap-wz9l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.035727863s
I0523 04:09:32.440] STEP: Saw pod success
I0523 04:09:32.440] May 23 02:55:01.165: INFO: Pod "pod-subpath-test-configmap-wz9l" satisfied condition "Succeeded or Failed"
I0523 04:09:32.441] May 23 02:55:01.167: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-configmap-wz9l container test-container-subpath-configmap-wz9l: <nil>
I0523 04:09:32.441] STEP: delete the pod
I0523 04:09:32.441] May 23 02:55:01.187: INFO: Waiting for pod pod-subpath-test-configmap-wz9l to disappear
I0523 04:09:32.441] May 23 02:55:01.189: INFO: Pod pod-subpath-test-configmap-wz9l no longer exists
I0523 04:09:32.441] STEP: Deleting pod pod-subpath-test-configmap-wz9l
I0523 04:09:32.441] May 23 02:55:01.189: INFO: Deleting pod "pod-subpath-test-configmap-wz9l" in namespace "subpath-7341"
... skipping 7 lines ...
I0523 04:09:32.442] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
I0523 04:09:32.442]   Atomic writer volumes
I0523 04:09:32.442]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
I0523 04:09:32.443]     should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
I0523 04:09:32.443]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.443] ------------------------------
I0523 04:09:32.443] {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":292,"completed":30,"skipped":527,"failed":0}
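The atomic-writer subpath case mounts a single ConfigMap key over an existing file path via subPath, and the pod keeps reading a consistent file while the key is updated. The core of such a spec (sketch; the ConfigMap name and paths are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "cat /etc/demo/config.txt"]
    volumeMounts:
    - name: config
      mountPath: /etc/demo/config.txt   # a single file, not a directory
      subPath: config.txt               # one key from the ConfigMap volume
  volumes:
  - name: config
    configMap:
      name: demo-config                 # hypothetical ConfigMap with that key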
I0523 04:09:32.443] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
I0523 04:09:32.443]   Should recreate evicted statefulset [Conformance]
I0523 04:09:32.443]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.443] [BeforeEach] [sig-apps] StatefulSet
I0523 04:09:32.443]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0523 04:09:32.444] STEP: Creating a kubernetes client
... skipping 18 lines ...
I0523 04:09:32.447] STEP: Creating pod with conflicting port in namespace statefulset-8505
I0523 04:09:32.447] STEP: Creating statefulset with conflicting port in namespace statefulset-8505
I0523 04:09:32.447] STEP: Waiting until pod test-pod is running in namespace statefulset-8505
I0523 04:09:32.447] STEP: Waiting until stateful pod ss-0 has been recreated and deleted at least once in namespace statefulset-8505
I0523 04:09:32.447] May 23 02:55:09.344: INFO: Observed stateful pod in namespace: statefulset-8505, name: ss-0, uid: 3548c654-c634-41f9-bac0-ef6392708195, status phase: Pending. Waiting for statefulset controller to delete.
I0523 04:09:32.447] I0523 02:55:09.344453      17 retrywatcher.go:247] Starting RetryWatcher.
I0523 04:09:32.448] May 23 02:55:09.538: INFO: Observed stateful pod in namespace: statefulset-8505, name: ss-0, uid: 3548c654-c634-41f9-bac0-ef6392708195, status phase: Failed. Waiting for statefulset controller to delete.
I0523 04:09:32.448] May 23 02:55:09.543: INFO: Observed stateful pod in namespace: statefulset-8505, name: ss-0, uid: 3548c654-c634-41f9-bac0-ef6392708195, status phase: Failed. Waiting for statefulset controller to delete.
I0523 04:09:32.448] May 23 02:55:09.545: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8505
I0523 04:09:32.448] STEP: Removing pod with conflicting port in namespace statefulset-8505
I0523 04:09:32.448] I0523 02:55:09.545692      17 retrywatcher.go:147] Stopping RetryWatcher.
I0523 04:09:32.448] I0523 02:55:09.545919      17 retrywatcher.go:275] Stopping RetryWatcher.
I0523 04:09:32.449] STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-8505 and reaches the Running state
I0523 04:09:32.449] [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
... skipping 12 lines ...
I0523 04:09:32.450] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0523 04:09:32.451]   [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
I0523 04:09:32.451]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:32.451]     Should recreate evicted statefulset [Conformance]
I0523 04:09:32.451]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.451] ------------------------------
I0523 04:09:32.451] {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":292,"completed":31,"skipped":527,"failed":0}
I0523 04:09:32.451] SSSSSSSSSSSSSSSSSS
I0523 04:09:32.451] ------------------------------
I0523 04:09:32.452] [sig-storage] ConfigMap 
I0523 04:09:32.452]   should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.452]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.452] [BeforeEach] [sig-storage] ConfigMap
... skipping 10 lines ...
I0523 04:09:32.454] I0523 02:55:33.724313      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.454] [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.454]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.454] I0523 02:55:33.726361      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.455] STEP: Creating configMap with name configmap-test-volume-96641ef3-ff59-454e-9979-aba6f75214a1
I0523 04:09:32.455] STEP: Creating a pod to test consume configMaps
I0523 04:09:32.455] May 23 02:55:33.734: INFO: Waiting up to 5m0s for pod "pod-configmaps-f46e7720-0098-469e-bb45-0b334b86f240" in namespace "configmap-1235" to be "Succeeded or Failed"
I0523 04:09:32.455] May 23 02:55:33.736: INFO: Pod "pod-configmaps-f46e7720-0098-469e-bb45-0b334b86f240": Phase="Pending", Reason="", readiness=false. Elapsed: 1.722553ms
I0523 04:09:32.455] May 23 02:55:35.739: INFO: Pod "pod-configmaps-f46e7720-0098-469e-bb45-0b334b86f240": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004786568s
I0523 04:09:32.455] STEP: Saw pod success
I0523 04:09:32.456] May 23 02:55:35.739: INFO: Pod "pod-configmaps-f46e7720-0098-469e-bb45-0b334b86f240" satisfied condition "Succeeded or Failed"
I0523 04:09:32.456] May 23 02:55:35.741: INFO: Trying to get logs from node kind-worker pod pod-configmaps-f46e7720-0098-469e-bb45-0b334b86f240 container configmap-volume-test: <nil>
I0523 04:09:32.456] STEP: delete the pod
I0523 04:09:32.456] May 23 02:55:35.760: INFO: Waiting for pod pod-configmaps-f46e7720-0098-469e-bb45-0b334b86f240 to disappear
I0523 04:09:32.456] May 23 02:55:35.763: INFO: Pod pod-configmaps-f46e7720-0098-469e-bb45-0b334b86f240 no longer exists
I0523 04:09:32.456] [AfterEach] [sig-storage] ConfigMap
I0523 04:09:32.457]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.457] May 23 02:55:35.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.457] STEP: Destroying namespace "configmap-1235" for this suite.
I0523 04:09:32.457] •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":32,"skipped":545,"failed":0}
I0523 04:09:32.457] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.457] ------------------------------
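defaultMode on a ConfigMap volume sets the permission bits of every projected file, which is what this case reads back and asserts. Sketch (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: configmap-mode-demo
spec:
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/config"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: demo-config       # hypothetical
      defaultMode: 0400       # applied to each key's file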
I0523 04:09:32.457] [k8s.io] Probing container 
I0523 04:09:32.458]   should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
I0523 04:09:32.458]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.458] [BeforeEach] [k8s.io] Probing container
... skipping 26 lines ...
I0523 04:09:32.462] • [SLOW TEST:242.518 seconds]
I0523 04:09:32.462] [k8s.io] Probing container
I0523 04:09:32.462] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:32.462]   should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
I0523 04:09:32.462]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.462] ------------------------------
I0523 04:09:32.462] {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":292,"completed":33,"skipped":609,"failed":0}
I0523 04:09:32.462] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.463] ------------------------------
I0523 04:09:32.463] [sig-api-machinery] Garbage collector 
I0523 04:09:32.463]   should not be blocked by dependency circle [Conformance]
I0523 04:09:32.463]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.463] [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 22 lines ...
I0523 04:09:32.466] • [SLOW TEST:5.173 seconds]
I0523 04:09:32.467] [sig-api-machinery] Garbage collector
I0523 04:09:32.467] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.467]   should not be blocked by dependency circle [Conformance]
I0523 04:09:32.467]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.467] ------------------------------
I0523 04:09:32.467] {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":292,"completed":34,"skipped":652,"failed":0}
I0523 04:09:32.467] SSSSSSSSSSSSSSSS
I0523 04:09:32.467] ------------------------------
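The garbage collector tests are driven by metadata.ownerReferences: the collector removes dependents after their owners disappear, and it must make progress even when the references form a cycle. What a dependent's owner link looks like, as a sketch (the uid is a placeholder and must match the live owner object):

apiVersion: v1
kind: Pod
metadata:
  name: dependent-pod
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: owner-pod
    uid: REPLACE-WITH-OWNER-UID   # placeholder, copy from the owner
    blockOwnerDeletion: true      # foreground deletion of the owner waits on this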
I0523 04:09:32.467] [sig-storage] EmptyDir volumes 
I0523 04:09:32.467]   should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.467]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.468] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 9 lines ...
I0523 04:09:32.469] I0523 02:59:43.586446      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.469] I0523 02:59:43.586476      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.469] I0523 02:59:43.589001      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.470] [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.470]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.470] STEP: Creating a pod to test emptydir 0777 on tmpfs
I0523 04:09:32.470] May 23 02:59:43.593: INFO: Waiting up to 5m0s for pod "pod-1cdcbb4d-1510-45a9-b4ba-ebb93265be72" in namespace "emptydir-6150" to be "Succeeded or Failed"
I0523 04:09:32.470] May 23 02:59:43.595: INFO: Pod "pod-1cdcbb4d-1510-45a9-b4ba-ebb93265be72": Phase="Pending", Reason="", readiness=false. Elapsed: 1.88285ms
I0523 04:09:32.470] May 23 02:59:45.599: INFO: Pod "pod-1cdcbb4d-1510-45a9-b4ba-ebb93265be72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005389307s
I0523 04:09:32.470] May 23 02:59:47.602: INFO: Pod "pod-1cdcbb4d-1510-45a9-b4ba-ebb93265be72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00846774s
I0523 04:09:32.471] STEP: Saw pod success
I0523 04:09:32.471] May 23 02:59:47.602: INFO: Pod "pod-1cdcbb4d-1510-45a9-b4ba-ebb93265be72" satisfied condition "Succeeded or Failed"
I0523 04:09:32.471] May 23 02:59:47.604: INFO: Trying to get logs from node kind-worker pod pod-1cdcbb4d-1510-45a9-b4ba-ebb93265be72 container test-container: <nil>
I0523 04:09:32.471] STEP: delete the pod
I0523 04:09:32.471] May 23 02:59:47.625: INFO: Waiting for pod pod-1cdcbb4d-1510-45a9-b4ba-ebb93265be72 to disappear
I0523 04:09:32.471] May 23 02:59:47.627: INFO: Pod pod-1cdcbb4d-1510-45a9-b4ba-ebb93265be72 no longer exists
I0523 04:09:32.471] [AfterEach] [sig-storage] EmptyDir volumes
I0523 04:09:32.472]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.472] May 23 02:59:47.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.472] STEP: Destroying namespace "emptydir-6150" for this suite.
I0523 04:09:32.472] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":35,"skipped":668,"failed":0}
I0523 04:09:32.472] SSSSSS
I0523 04:09:32.472] ------------------------------
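The emptyDir cases distinguish the node-default medium from medium: Memory (tmpfs) and check the resulting mode while running as a non-root user. A sketch of the tmpfs variant (names and UID are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  securityContext:
    runAsUser: 1001             # non-root, as in the (non-root,0777,tmpfs) case
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /mnt/test && touch /mnt/test/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory            # tmpfs-backed; omit for the node default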
I0523 04:09:32.472] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
I0523 04:09:32.473]   removes definition from spec when one version gets changed to not be served [Conformance]
I0523 04:09:32.473]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.473] [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 24 lines ...
I0523 04:09:32.476] • [SLOW TEST:15.267 seconds]
I0523 04:09:32.477] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
I0523 04:09:32.477] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.477]   removes definition from spec when one version gets changed to not be served [Conformance]
I0523 04:09:32.477]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.477] ------------------------------
I0523 04:09:32.478] {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":292,"completed":36,"skipped":674,"failed":0}
I0523 04:09:32.478] SSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.478] ------------------------------
I0523 04:09:32.478] [sig-network] Services 
I0523 04:09:32.478]   should find a service from listing all namespaces [Conformance]
I0523 04:09:32.478]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.478] [BeforeEach] [sig-network] Services
... skipping 17 lines ...
I0523 04:09:32.481] [AfterEach] [sig-network] Services
I0523 04:09:32.482]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.482] May 23 03:00:03.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.482] STEP: Destroying namespace "services-1330" for this suite.
I0523 04:09:32.482] [AfterEach] [sig-network] Services
I0523 04:09:32.482]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:812
I0523 04:09:32.482] •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":292,"completed":37,"skipped":695,"failed":0}
I0523 04:09:32.482] SSS
I0523 04:09:32.483] ------------------------------
I0523 04:09:32.483] [sig-api-machinery] Garbage collector 
I0523 04:09:32.483]   should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
I0523 04:09:32.483]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.483] [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 54 lines ...
I0523 04:09:32.491] • [SLOW TEST:10.255 seconds]
I0523 04:09:32.491] [sig-api-machinery] Garbage collector
I0523 04:09:32.491] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.491]   should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
I0523 04:09:32.492]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.492] ------------------------------
I0523 04:09:32.492] {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":292,"completed":38,"skipped":698,"failed":0}
I0523 04:09:32.492] SSSSSSSS
I0523 04:09:32.492] ------------------------------
I0523 04:09:32.492] [sig-network] Services 
I0523 04:09:32.492]   should be able to change the type from ClusterIP to ExternalName [Conformance]
I0523 04:09:32.493]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.493] [BeforeEach] [sig-network] Services
... skipping 47 lines ...
I0523 04:09:32.501] • [SLOW TEST:9.505 seconds]
I0523 04:09:32.501] [sig-network] Services
I0523 04:09:32.501] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
I0523 04:09:32.501]   should be able to change the type from ClusterIP to ExternalName [Conformance]
I0523 04:09:32.501]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.501] ------------------------------
I0523 04:09:32.502] {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":292,"completed":39,"skipped":706,"failed":0}
I0523 04:09:32.502] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.502] ------------------------------
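Changing spec.type from ClusterIP to ExternalName replaces cluster-internal endpoints with a DNS CNAME; when mutating an existing Service this way, spec.clusterIP generally has to be cleared in the same update. The resulting object, as a sketch (the external host is illustrative):

apiVersion: v1
kind: Service
metadata:
  name: externalname-demo
spec:
  type: ExternalName
  externalName: my.database.example.com   # hypothetical external host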
I0523 04:09:32.502] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
I0523 04:09:32.502]   works for CRD preserving unknown fields at the schema root [Conformance]
I0523 04:09:32.502]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.502] [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 37 lines ...
I0523 04:09:32.508] • [SLOW TEST:7.166 seconds]
I0523 04:09:32.508] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
I0523 04:09:32.508] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.509]   works for CRD preserving unknown fields at the schema root [Conformance]
I0523 04:09:32.509]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.509] ------------------------------
I0523 04:09:32.509] {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":292,"completed":40,"skipped":742,"failed":0}
I0523 04:09:32.509] SSSSSSSS
I0523 04:09:32.509] ------------------------------
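The CustomResourcePublishOpenAPI cases toggle per-version publication (a version with served: false drops out of the published OpenAPI spec) and schema pruning. A minimal CRD that preserves unknown fields at the schema root, as a sketch (group and names are illustrative):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true                # set to false to stop publishing this version
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # disables pruning here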
I0523 04:09:32.509] [k8s.io] Variable Expansion 
I0523 04:09:32.510]   should allow substituting values in a volume subpath [sig-storage] [Conformance]
I0523 04:09:32.510]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.510] [BeforeEach] [k8s.io] Variable Expansion
... skipping 9 lines ...
I0523 04:09:32.512] I0523 03:00:30.080747      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.512] I0523 03:00:30.080775      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.512] [It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
I0523 04:09:32.512]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.513] I0523 03:00:30.083263      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.513] STEP: Creating a pod to test substitution in volume subpath
I0523 04:09:32.513] May 23 03:00:30.088: INFO: Waiting up to 5m0s for pod "var-expansion-dc14ea6a-f501-43f8-89aa-c180c3f6aba7" in namespace "var-expansion-5416" to be "Succeeded or Failed"
I0523 04:09:32.513] May 23 03:00:30.090: INFO: Pod "var-expansion-dc14ea6a-f501-43f8-89aa-c180c3f6aba7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061695ms
I0523 04:09:32.513] May 23 03:00:32.093: INFO: Pod "var-expansion-dc14ea6a-f501-43f8-89aa-c180c3f6aba7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005182497s
I0523 04:09:32.514] STEP: Saw pod success
I0523 04:09:32.514] May 23 03:00:32.094: INFO: Pod "var-expansion-dc14ea6a-f501-43f8-89aa-c180c3f6aba7" satisfied condition "Succeeded or Failed"
I0523 04:09:32.514] May 23 03:00:32.096: INFO: Trying to get logs from node kind-worker pod var-expansion-dc14ea6a-f501-43f8-89aa-c180c3f6aba7 container dapi-container: <nil>
I0523 04:09:32.514] STEP: delete the pod
I0523 04:09:32.514] May 23 03:00:32.106: INFO: Waiting for pod var-expansion-dc14ea6a-f501-43f8-89aa-c180c3f6aba7 to disappear
I0523 04:09:32.514] May 23 03:00:32.109: INFO: Pod var-expansion-dc14ea6a-f501-43f8-89aa-c180c3f6aba7 no longer exists
I0523 04:09:32.514] [AfterEach] [k8s.io] Variable Expansion
I0523 04:09:32.515]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.515] May 23 03:00:32.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.515] STEP: Destroying namespace "var-expansion-5416" for this suite.
I0523 04:09:32.515] •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":292,"completed":41,"skipped":750,"failed":0}
I0523 04:09:32.515] SSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.515] ------------------------------
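"Substituting values in a volume subpath" refers to subPathExpr, which expands environment variables (here fed by the downward API) into the mount's subpath, so each pod writes under its own directory. Sketch (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: subpathexpr-demo
spec:
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo hello > /logs/out.txt"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /logs
      subPathExpr: $(POD_NAME)  # expands to a per-pod subdirectory
  volumes:
  - name: workdir
    emptyDir: {}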
I0523 04:09:32.515] [k8s.io] Probing container 
I0523 04:09:32.515]   should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
I0523 04:09:32.516]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.516] [BeforeEach] [k8s.io] Probing container
... skipping 26 lines ...
I0523 04:09:32.520] • [SLOW TEST:242.524 seconds]
I0523 04:09:32.520] [k8s.io] Probing container
I0523 04:09:32.520] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:32.520]   should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
I0523 04:09:32.520]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.520] ------------------------------
I0523 04:09:32.521] {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":292,"completed":42,"skipped":775,"failed":0}
I0523 04:09:32.521] SSSS
I0523 04:09:32.521] ------------------------------
I0523 04:09:32.521] [sig-storage] EmptyDir volumes 
I0523 04:09:32.521]   volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.521]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.521] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 9 lines ...
I0523 04:09:32.523] I0523 03:04:34.765990      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.523] I0523 03:04:34.766025      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.523] [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.523]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.524] I0523 03:04:34.768425      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.524] STEP: Creating a pod to test emptydir volume type on node default medium
I0523 04:09:32.524] May 23 03:04:34.776: INFO: Waiting up to 5m0s for pod "pod-257ce7ef-3aa2-4d7b-a71a-94c5f87f8980" in namespace "emptydir-3199" to be "Succeeded or Failed"
I0523 04:09:32.524] May 23 03:04:34.778: INFO: Pod "pod-257ce7ef-3aa2-4d7b-a71a-94c5f87f8980": Phase="Pending", Reason="", readiness=false. Elapsed: 1.955792ms
I0523 04:09:32.524] May 23 03:04:36.781: INFO: Pod "pod-257ce7ef-3aa2-4d7b-a71a-94c5f87f8980": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004917692s
I0523 04:09:32.524] May 23 03:04:38.784: INFO: Pod "pod-257ce7ef-3aa2-4d7b-a71a-94c5f87f8980": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007919752s
I0523 04:09:32.525] STEP: Saw pod success
I0523 04:09:32.525] May 23 03:04:38.784: INFO: Pod "pod-257ce7ef-3aa2-4d7b-a71a-94c5f87f8980" satisfied condition "Succeeded or Failed"
I0523 04:09:32.525] May 23 03:04:38.786: INFO: Trying to get logs from node kind-worker pod pod-257ce7ef-3aa2-4d7b-a71a-94c5f87f8980 container test-container: <nil>
I0523 04:09:32.525] STEP: delete the pod
I0523 04:09:32.525] May 23 03:04:38.807: INFO: Waiting for pod pod-257ce7ef-3aa2-4d7b-a71a-94c5f87f8980 to disappear
I0523 04:09:32.525] May 23 03:04:38.809: INFO: Pod pod-257ce7ef-3aa2-4d7b-a71a-94c5f87f8980 no longer exists
I0523 04:09:32.525] [AfterEach] [sig-storage] EmptyDir volumes
I0523 04:09:32.526]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.526] May 23 03:04:38.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.526] STEP: Destroying namespace "emptydir-3199" for this suite.
I0523 04:09:32.526] •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":43,"skipped":779,"failed":0}
I0523 04:09:32.526] S
I0523 04:09:32.526] ------------------------------
I0523 04:09:32.526] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0523 04:09:32.527]   should mutate custom resource with pruning [Conformance]
I0523 04:09:32.527]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.527] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 36 lines ...
I0523 04:09:32.534] • [SLOW TEST:6.891 seconds]
I0523 04:09:32.534] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0523 04:09:32.534] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.534]   should mutate custom resource with pruning [Conformance]
I0523 04:09:32.534]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.534] ------------------------------
I0523 04:09:32.535] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":292,"completed":44,"skipped":780,"failed":0}
I0523 04:09:32.535] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.535] ------------------------------
I0523 04:09:32.535] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0523 04:09:32.535]   patching/updating a validating webhook should work [Conformance]
I0523 04:09:32.535]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.535] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 39 lines ...
I0523 04:09:32.542] • [SLOW TEST:5.697 seconds]
I0523 04:09:32.542] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0523 04:09:32.542] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.542]   patching/updating a validating webhook should work [Conformance]
I0523 04:09:32.543]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.543] ------------------------------
I0523 04:09:32.543] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":292,"completed":45,"skipped":877,"failed":0}
I0523 04:09:32.543] SS
I0523 04:09:32.543] ------------------------------
I0523 04:09:32.543] [sig-storage] Projected downwardAPI 
I0523 04:09:32.543]   should update labels on modification [NodeConformance] [Conformance]
I0523 04:09:32.543]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.543] [BeforeEach] [sig-storage] Projected downwardAPI
... skipping 16 lines ...
I0523 04:09:32.546] STEP: Creating the pod
I0523 04:09:32.546] May 23 03:04:54.076: INFO: Successfully updated pod "labelsupdatece629d24-76b0-4f5f-b3f4-42cc47392aec"
I0523 04:09:32.546] [AfterEach] [sig-storage] Projected downwardAPI
I0523 04:09:32.546]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.546] May 23 03:04:56.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.546] STEP: Destroying namespace "projected-5733" for this suite.
I0523 04:09:32.547] •{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":292,"completed":46,"skipped":879,"failed":0}
I0523 04:09:32.547] SSSSS
I0523 04:09:32.547] ------------------------------
I0523 04:09:32.547] [sig-storage] Secrets 
I0523 04:09:32.547]   optional updates should be reflected in volume [NodeConformance] [Conformance]
I0523 04:09:32.547]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.547] [BeforeEach] [sig-storage] Secrets
... skipping 19 lines ...
I0523 04:09:32.551] STEP: Creating secret with name s-test-opt-create-190f07e5-0ba4-4d32-9408-1c1ffb344e6d
I0523 04:09:32.551] STEP: waiting to observe update in volume
I0523 04:09:32.551] [AfterEach] [sig-storage] Secrets
I0523 04:09:32.551]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.551] May 23 03:05:00.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.552] STEP: Destroying namespace "secrets-6370" for this suite.
I0523 04:09:32.552] •{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":47,"skipped":884,"failed":0}
I0523 04:09:32.552] S
I0523 04:09:32.552] ------------------------------
I0523 04:09:32.552] [sig-network] DNS 
I0523 04:09:32.552]   should provide DNS for the cluster  [Conformance]
I0523 04:09:32.552]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.553] [BeforeEach] [sig-network] DNS
... skipping 30 lines ...
I0523 04:09:32.558] • [SLOW TEST:8.173 seconds]
I0523 04:09:32.558] [sig-network] DNS
I0523 04:09:32.558] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
I0523 04:09:32.558]   should provide DNS for the cluster  [Conformance]
I0523 04:09:32.558]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.558] ------------------------------
I0523 04:09:32.558] {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":292,"completed":48,"skipped":885,"failed":0}
I0523 04:09:32.559] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.559] ------------------------------
I0523 04:09:32.559] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
I0523 04:09:32.559]   works for multiple CRDs of different groups [Conformance]
I0523 04:09:32.559]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.559] [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 22 lines ...
I0523 04:09:32.563] • [SLOW TEST:14.279 seconds]
I0523 04:09:32.563] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
I0523 04:09:32.563] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.563]   works for multiple CRDs of different groups [Conformance]
I0523 04:09:32.563]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.563] ------------------------------
I0523 04:09:32.564] {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":292,"completed":49,"skipped":921,"failed":0}
I0523 04:09:32.564] SSSSS
I0523 04:09:32.564] ------------------------------
I0523 04:09:32.564] [sig-apps] Daemon set [Serial] 
I0523 04:09:32.564]   should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
I0523 04:09:32.564]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.564] [BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 137 lines ...
I0523 04:09:32.588] • [SLOW TEST:33.875 seconds]
I0523 04:09:32.588] [sig-apps] Daemon set [Serial]
I0523 04:09:32.588] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0523 04:09:32.589]   should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
I0523 04:09:32.589]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.589] ------------------------------
I0523 04:09:32.589] {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":292,"completed":50,"skipped":926,"failed":0}
I0523 04:09:32.589] SSSSSSSS
I0523 04:09:32.589] ------------------------------
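A DaemonSet with updateStrategy RollingUpdate replaces its pods node by node when the template changes, which is the behavior this case verifies. Minimal sketch (names are illustrative):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: rolling-ds-demo
spec:
  selector:
    matchLabels:
      app: rolling-ds-demo
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1         # at most one node's pod down at a time
  template:
    metadata:
      labels:
        app: rolling-ds-demo
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]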
I0523 04:09:32.589] [sig-storage] ConfigMap 
I0523 04:09:32.590]   should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
I0523 04:09:32.590]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.590] [BeforeEach] [sig-storage] ConfigMap
... skipping 10 lines ...
I0523 04:09:32.592] I0523 03:05:56.748960      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.592] [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
I0523 04:09:32.592]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.592] I0523 03:05:56.751312      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.592] STEP: Creating configMap with name configmap-test-volume-map-b95fae53-5416-4d74-bd88-285f1a91310c
I0523 04:09:32.592] STEP: Creating a pod to test consume configMaps
I0523 04:09:32.593] May 23 03:05:56.759: INFO: Waiting up to 5m0s for pod "pod-configmaps-8c777537-0d01-4fe4-8f2f-9ac328e9aa3a" in namespace "configmap-9526" to be "Succeeded or Failed"
I0523 04:09:32.593] May 23 03:05:56.762: INFO: Pod "pod-configmaps-8c777537-0d01-4fe4-8f2f-9ac328e9aa3a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.98947ms
I0523 04:09:32.593] May 23 03:05:58.765: INFO: Pod "pod-configmaps-8c777537-0d01-4fe4-8f2f-9ac328e9aa3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005012233s
I0523 04:09:32.593] STEP: Saw pod success
I0523 04:09:32.593] May 23 03:05:58.765: INFO: Pod "pod-configmaps-8c777537-0d01-4fe4-8f2f-9ac328e9aa3a" satisfied condition "Succeeded or Failed"
I0523 04:09:32.593] May 23 03:05:58.767: INFO: Trying to get logs from node kind-worker pod pod-configmaps-8c777537-0d01-4fe4-8f2f-9ac328e9aa3a container configmap-volume-test: <nil>
I0523 04:09:32.593] STEP: delete the pod
I0523 04:09:32.593] May 23 03:05:58.779: INFO: Waiting for pod pod-configmaps-8c777537-0d01-4fe4-8f2f-9ac328e9aa3a to disappear
I0523 04:09:32.594] May 23 03:05:58.780: INFO: Pod pod-configmaps-8c777537-0d01-4fe4-8f2f-9ac328e9aa3a no longer exists
I0523 04:09:32.594] [AfterEach] [sig-storage] ConfigMap
I0523 04:09:32.594]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.594] May 23 03:05:58.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.594] STEP: Destroying namespace "configmap-9526" for this suite.
I0523 04:09:32.594] •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":292,"completed":51,"skipped":934,"failed":0}
I0523 04:09:32.594] SSSSSSSSSSSSSSSSSSS
I0523 04:09:32.594] ------------------------------
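[The repeated "Waiting up to 5m0s for pod ... to be \"Succeeded or Failed\"" lines, with an Elapsed of a few milliseconds followed by multiples of ~2s, come from polling the pod phase until it is terminal. A minimal sketch of that wait loop with client-go (v0.18+ signatures assumed); the helper name and print format are ours, not the framework's exact code.]

package sketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodTerminal checks the phase immediately (the ~2ms line), then
// every 2s until the pod is Succeeded or Failed or the 5m timeout fires.
func waitForPodTerminal(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
		return pod.Status.Phase == v1.PodSucceeded || pod.Status.Phase == v1.PodFailed, nil
	})
}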
I0523 04:09:32.594] [sig-api-machinery] Aggregator 
I0523 04:09:32.595]   Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
I0523 04:09:32.595]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.595] [BeforeEach] [sig-api-machinery] Aggregator
... skipping 42 lines ...
I0523 04:09:32.603] • [SLOW TEST:15.040 seconds]
I0523 04:09:32.603] [sig-api-machinery] Aggregator
I0523 04:09:32.603] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.604]   Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
I0523 04:09:32.604]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.604] ------------------------------
I0523 04:09:32.604] {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":292,"completed":52,"skipped":953,"failed":0}
I0523 04:09:32.604] SSSSSSSSSSSS
I0523 04:09:32.604] ------------------------------
I0523 04:09:32.604] [sig-api-machinery] Garbage collector 
I0523 04:09:32.604]   should delete RS created by deployment when not orphaning [Conformance]
I0523 04:09:32.605]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.605] [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 43 lines ...
I0523 04:09:32.611] 
I0523 04:09:32.611] W0523 03:06:14.983208      17 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
I0523 04:09:32.611] [AfterEach] [sig-api-machinery] Garbage collector
I0523 04:09:32.611]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.611] May 23 03:06:14.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.612] STEP: Destroying namespace "gc-6825" for this suite.
I0523 04:09:32.612] •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":292,"completed":53,"skipped":965,"failed":0}
I0523 04:09:32.612] SSSSSS
I0523 04:09:32.612] ------------------------------
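[The garbage-collector test above deletes a Deployment "when not orphaning" and expects the owned ReplicaSet to go with it: the ReplicaSet's ownerReferences point back at the Deployment, so the GC cascades the delete. A sketch of such a non-orphaning delete, assuming client-go v0.18+; names are illustrative.]

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteWithCascade deletes a Deployment with Background propagation:
// the call returns immediately and the garbage collector then removes
// the dependent ReplicaSet and its pods.
func deleteWithCascade(c kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	return c.AppsV1().Deployments(ns).Delete(context.TODO(), name,
		metav1.DeleteOptions{PropagationPolicy: &policy})
}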
I0523 04:09:32.612] [sig-storage] Projected downwardAPI 
I0523 04:09:32.612]   should update annotations on modification [NodeConformance] [Conformance]
I0523 04:09:32.613]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.613] [BeforeEach] [sig-storage] Projected downwardAPI
... skipping 23 lines ...
I0523 04:09:32.616] • [SLOW TEST:6.678 seconds]
I0523 04:09:32.616] [sig-storage] Projected downwardAPI
I0523 04:09:32.617] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
I0523 04:09:32.617]   should update annotations on modification [NodeConformance] [Conformance]
I0523 04:09:32.617]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.617] ------------------------------
I0523 04:09:32.617] {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":292,"completed":54,"skipped":971,"failed":0}
I0523 04:09:32.617] SSSSSS
I0523 04:09:32.617] ------------------------------
I0523 04:09:32.617] [sig-storage] Projected configMap 
I0523 04:09:32.617]   should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.618]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.618] [BeforeEach] [sig-storage] Projected configMap
... skipping 10 lines ...
I0523 04:09:32.619] I0523 03:06:21.792051      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.620] [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.620]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.620] I0523 03:06:21.794429      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.620] STEP: Creating configMap with name projected-configmap-test-volume-cc9edf7b-8582-4d92-af77-30c60da3c7be
I0523 04:09:32.620] STEP: Creating a pod to test consume configMaps
I0523 04:09:32.621] May 23 03:06:21.801: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0e6d2052-646a-48bc-ac95-46dd4cf45630" in namespace "projected-7502" to be "Succeeded or Failed"
I0523 04:09:32.621] May 23 03:06:21.803: INFO: Pod "pod-projected-configmaps-0e6d2052-646a-48bc-ac95-46dd4cf45630": Phase="Pending", Reason="", readiness=false. Elapsed: 1.805336ms
I0523 04:09:32.621] May 23 03:06:23.806: INFO: Pod "pod-projected-configmaps-0e6d2052-646a-48bc-ac95-46dd4cf45630": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004677441s
I0523 04:09:32.621] STEP: Saw pod success
I0523 04:09:32.621] May 23 03:06:23.806: INFO: Pod "pod-projected-configmaps-0e6d2052-646a-48bc-ac95-46dd4cf45630" satisfied condition "Succeeded or Failed"
I0523 04:09:32.622] May 23 03:06:23.809: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-0e6d2052-646a-48bc-ac95-46dd4cf45630 container projected-configmap-volume-test: <nil>
I0523 04:09:32.622] STEP: delete the pod
I0523 04:09:32.622] May 23 03:06:23.820: INFO: Waiting for pod pod-projected-configmaps-0e6d2052-646a-48bc-ac95-46dd4cf45630 to disappear
I0523 04:09:32.622] May 23 03:06:23.821: INFO: Pod pod-projected-configmaps-0e6d2052-646a-48bc-ac95-46dd4cf45630 no longer exists
I0523 04:09:32.622] [AfterEach] [sig-storage] Projected configMap
I0523 04:09:32.622]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.622] May 23 03:06:23.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.622] STEP: Destroying namespace "projected-7502" for this suite.
I0523 04:09:32.623] •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":55,"skipped":977,"failed":0}
I0523 04:09:32.623] SSSSSS
I0523 04:09:32.623] ------------------------------
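[The paired "Starting reflector *v1.ServiceAccount ... / Stopping reflector ..." lines framing each test come from a short-lived informer the framework runs while waiting for the namespace's default service account. Roughly how such a watcher is built with client-go's cache package; the handler body is our illustration.]

package sketch

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// watchServiceAccounts lists-and-watches ServiceAccounts in one namespace;
// running the controller emits the "Starting/Listing" reflector logs, and
// closing the returned channel emits "Stopping reflector".
func watchServiceAccounts(c kubernetes.Interface, ns string) chan struct{} {
	lw := cache.NewListWatchFromClient(
		c.CoreV1().RESTClient(), "serviceaccounts", ns, fields.Everything())
	_, controller := cache.NewInformer(lw, &v1.ServiceAccount{}, 0,
		cache.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) {
				fmt.Printf("saw %s\n", obj.(*v1.ServiceAccount).Name)
			},
		})
	stop := make(chan struct{})
	go controller.Run(stop)
	return stop
}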
I0523 04:09:32.623] [sig-cli] Kubectl client Update Demo 
I0523 04:09:32.623]   should create and stop a replication controller  [Conformance]
I0523 04:09:32.623]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.623] [BeforeEach] [sig-cli] Kubectl client
... skipping 82 lines ...
I0523 04:09:32.637] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
I0523 04:09:32.637]   Update Demo
I0523 04:09:32.637]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301
I0523 04:09:32.637]     should create and stop a replication controller  [Conformance]
I0523 04:09:32.637]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.637] ------------------------------
I0523 04:09:32.638] {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":292,"completed":56,"skipped":983,"failed":0}
I0523 04:09:32.638] SSSSSSSSSSSSSSSSS
I0523 04:09:32.638] ------------------------------
I0523 04:09:32.638] [sig-storage] Downward API volume 
I0523 04:09:32.638]   should provide container's memory request [NodeConformance] [Conformance]
I0523 04:09:32.638]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.638] [BeforeEach] [sig-storage] Downward API volume
... skipping 11 lines ...
I0523 04:09:32.641] [BeforeEach] [sig-storage] Downward API volume
I0523 04:09:32.641]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
I0523 04:09:32.641] I0523 03:06:31.007106      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.641] [It] should provide container's memory request [NodeConformance] [Conformance]
I0523 04:09:32.641]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.642] STEP: Creating a pod to test downward API volume plugin
I0523 04:09:32.642] May 23 03:06:31.012: INFO: Waiting up to 5m0s for pod "downwardapi-volume-38f2a8d2-9e55-4ace-bfae-fb6a4e30b7b8" in namespace "downward-api-9965" to be "Succeeded or Failed"
I0523 04:09:32.642] May 23 03:06:31.014: INFO: Pod "downwardapi-volume-38f2a8d2-9e55-4ace-bfae-fb6a4e30b7b8": Phase="Pending", Reason="", readiness=false. Elapsed: 1.966392ms
I0523 04:09:32.642] May 23 03:06:33.017: INFO: Pod "downwardapi-volume-38f2a8d2-9e55-4ace-bfae-fb6a4e30b7b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005041818s
I0523 04:09:32.642] STEP: Saw pod success
I0523 04:09:32.642] May 23 03:06:33.017: INFO: Pod "downwardapi-volume-38f2a8d2-9e55-4ace-bfae-fb6a4e30b7b8" satisfied condition "Succeeded or Failed"
I0523 04:09:32.643] May 23 03:06:33.019: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-38f2a8d2-9e55-4ace-bfae-fb6a4e30b7b8 container client-container: <nil>
I0523 04:09:32.643] STEP: delete the pod
I0523 04:09:32.643] May 23 03:06:33.030: INFO: Waiting for pod downwardapi-volume-38f2a8d2-9e55-4ace-bfae-fb6a4e30b7b8 to disappear
I0523 04:09:32.643] May 23 03:06:33.033: INFO: Pod downwardapi-volume-38f2a8d2-9e55-4ace-bfae-fb6a4e30b7b8 no longer exists
I0523 04:09:32.643] [AfterEach] [sig-storage] Downward API volume
I0523 04:09:32.643]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.643] May 23 03:06:33.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.643] STEP: Destroying namespace "downward-api-9965" for this suite.
I0523 04:09:32.643] •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":292,"completed":57,"skipped":1000,"failed":0}
I0523 04:09:32.643] SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.644] ------------------------------
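[The test above mounts the container's memory request into the pod through a downward-API volume and reads it back from the client-container's output. A sketch of the volume definition involved; path and container name are illustrative.]

package sketch

import v1 "k8s.io/api/core/v1"

// memoryRequestVolume exposes the container's memory request as a file
// inside the pod via a resourceFieldRef.
func memoryRequestVolume() v1.Volume {
	return v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			DownwardAPI: &v1.DownwardAPIVolumeSource{
				Items: []v1.DownwardAPIVolumeFile{{
					Path: "memory_request",
					ResourceFieldRef: &v1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "requests.memory",
					},
				}},
			},
		},
	}
}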
I0523 04:09:32.644] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0523 04:09:32.644]   should deny crd creation [Conformance]
I0523 04:09:32.644]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.644] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 28 lines ...
I0523 04:09:32.647]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.647] May 23 03:06:36.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.647] STEP: Destroying namespace "webhook-5958" for this suite.
I0523 04:09:32.647] STEP: Destroying namespace "webhook-5958-markers" for this suite.
I0523 04:09:32.647] [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0523 04:09:32.648]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
I0523 04:09:32.648] •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":292,"completed":58,"skipped":1029,"failed":0}
I0523 04:09:32.648] SSSSSSSSSSSSSSSSS
I0523 04:09:32.648] ------------------------------
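[The "should deny crd creation" test registers a validating admission webhook scoped to CustomResourceDefinition CREATEs, backed by a service in the webhook-5958 namespaces seen above. A hedged sketch of such a configuration with the admissionregistration/v1 API; service name, path, and webhook name are placeholders, and the CABundle needed for TLS is omitted.]

package sketch

import (
	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// denyCRDWebhook intercepts CRD creation and routes it to a service whose
// handler always answers "denied".
func denyCRDWebhook(ns string) admissionregistrationv1.ValidatingWebhookConfiguration {
	fail := admissionregistrationv1.Fail
	none := admissionregistrationv1.SideEffectClassNone
	path := "/crd" // placeholder handler path
	return admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-crd"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-crd.example.com",
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"apiextensions.k8s.io"},
					APIVersions: []string{"*"},
					Resources:   []string{"customresourcedefinitions"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: ns, Name: "e2e-test-webhook", Path: &path,
				},
			},
			FailurePolicy:           &fail,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
}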
I0523 04:09:32.648] [sig-apps] ReplicationController 
I0523 04:09:32.648]   should surface a failure condition on a common issue like exceeded quota [Conformance]
I0523 04:09:32.648]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.648] [BeforeEach] [sig-apps] ReplicationController
... skipping 20 lines ...
I0523 04:09:32.651] May 23 03:06:38.931: INFO: Updating replication controller "condition-test"
I0523 04:09:32.651] STEP: Checking rc "condition-test" has no failure condition set
I0523 04:09:32.651] [AfterEach] [sig-apps] ReplicationController
I0523 04:09:32.652]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.652] May 23 03:06:39.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.652] STEP: Destroying namespace "replication-controller-8024" for this suite.
I0523 04:09:32.652] •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":292,"completed":59,"skipped":1046,"failed":0}
I0523 04:09:32.652] SSSSSSS
I0523 04:09:32.652] ------------------------------
I0523 04:09:32.652] [sig-apps] ReplicationController 
I0523 04:09:32.652]   should adopt matching pods on creation [Conformance]
I0523 04:09:32.652]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.653] [BeforeEach] [sig-apps] ReplicationController
... skipping 17 lines ...
I0523 04:09:32.655] STEP: When a replication controller with a matching selector is created
I0523 04:09:32.655] STEP: Then the orphan pod is adopted
I0523 04:09:32.656] [AfterEach] [sig-apps] ReplicationController
I0523 04:09:32.656]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.656] May 23 03:06:43.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.656] STEP: Destroying namespace "replication-controller-406" for this suite.
I0523 04:09:32.656] •{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":292,"completed":60,"skipped":1053,"failed":0}
I0523 04:09:32.656] SSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.656] ------------------------------
I0523 04:09:32.656] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0523 04:09:32.657]   should be able to deny attaching pod [Conformance]
I0523 04:09:32.657]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.657] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 38 lines ...
I0523 04:09:32.662] • [SLOW TEST:5.934 seconds]
I0523 04:09:32.662] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0523 04:09:32.663] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.663]   should be able to deny attaching pod [Conformance]
I0523 04:09:32.663]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.663] ------------------------------
I0523 04:09:32.663] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":292,"completed":61,"skipped":1073,"failed":0}
I0523 04:09:32.663] [sig-storage] Projected configMap 
I0523 04:09:32.663]   should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
I0523 04:09:32.663]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.664] [BeforeEach] [sig-storage] Projected configMap
I0523 04:09:32.664]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0523 04:09:32.664] STEP: Creating a kubernetes client
... skipping 8 lines ...
I0523 04:09:32.665] I0523 03:06:49.206370      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.665] [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
I0523 04:09:32.665]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.665] I0523 03:06:49.208369      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.665] STEP: Creating configMap with name projected-configmap-test-volume-0115176f-8634-47a9-8894-bb4edacf536a
I0523 04:09:32.666] STEP: Creating a pod to test consume configMaps
I0523 04:09:32.666] May 23 03:06:49.216: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a6db089c-32ce-48b7-8c26-8bd09629bc9d" in namespace "projected-5921" to be "Succeeded or Failed"
I0523 04:09:32.666] May 23 03:06:49.218: INFO: Pod "pod-projected-configmaps-a6db089c-32ce-48b7-8c26-8bd09629bc9d": Phase="Pending", Reason="", readiness=false. Elapsed: 1.989612ms
I0523 04:09:32.666] May 23 03:06:51.222: INFO: Pod "pod-projected-configmaps-a6db089c-32ce-48b7-8c26-8bd09629bc9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005339046s
I0523 04:09:32.667] May 23 03:06:53.224: INFO: Pod "pod-projected-configmaps-a6db089c-32ce-48b7-8c26-8bd09629bc9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008110309s
I0523 04:09:32.667] STEP: Saw pod success
I0523 04:09:32.667] May 23 03:06:53.224: INFO: Pod "pod-projected-configmaps-a6db089c-32ce-48b7-8c26-8bd09629bc9d" satisfied condition "Succeeded or Failed"
I0523 04:09:32.667] May 23 03:06:53.226: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-a6db089c-32ce-48b7-8c26-8bd09629bc9d container projected-configmap-volume-test: <nil>
I0523 04:09:32.667] STEP: delete the pod
I0523 04:09:32.667] May 23 03:06:53.238: INFO: Waiting for pod pod-projected-configmaps-a6db089c-32ce-48b7-8c26-8bd09629bc9d to disappear
I0523 04:09:32.668] May 23 03:06:53.240: INFO: Pod pod-projected-configmaps-a6db089c-32ce-48b7-8c26-8bd09629bc9d no longer exists
I0523 04:09:32.668] [AfterEach] [sig-storage] Projected configMap
I0523 04:09:32.668]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.668] May 23 03:06:53.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.668] STEP: Destroying namespace "projected-5921" for this suite.
I0523 04:09:32.668] •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":292,"completed":62,"skipped":1073,"failed":0}
I0523 04:09:32.669] SSSSSSS
I0523 04:09:32.669] ------------------------------
I0523 04:09:32.669] [sig-storage] ConfigMap 
I0523 04:09:32.669]   should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
I0523 04:09:32.669]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.669] [BeforeEach] [sig-storage] ConfigMap
... skipping 10 lines ...
I0523 04:09:32.671] I0523 03:06:53.369938      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.672] [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
I0523 04:09:32.672]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.672] I0523 03:06:53.372284      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.672] STEP: Creating configMap with name configmap-test-volume-map-0fbb0e6d-effd-424b-a5dc-50a77a4dcf8b
I0523 04:09:32.672] STEP: Creating a pod to test consume configMaps
I0523 04:09:32.673] May 23 03:06:53.379: INFO: Waiting up to 5m0s for pod "pod-configmaps-af53a77a-4e3b-4c76-aae2-55b72e89a64a" in namespace "configmap-744" to be "Succeeded or Failed"
I0523 04:09:32.673] May 23 03:06:53.381: INFO: Pod "pod-configmaps-af53a77a-4e3b-4c76-aae2-55b72e89a64a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.986581ms
I0523 04:09:32.673] May 23 03:06:55.384: INFO: Pod "pod-configmaps-af53a77a-4e3b-4c76-aae2-55b72e89a64a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005150435s
I0523 04:09:32.673] May 23 03:06:57.388: INFO: Pod "pod-configmaps-af53a77a-4e3b-4c76-aae2-55b72e89a64a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00862871s
I0523 04:09:32.673] STEP: Saw pod success
I0523 04:09:32.673] May 23 03:06:57.388: INFO: Pod "pod-configmaps-af53a77a-4e3b-4c76-aae2-55b72e89a64a" satisfied condition "Succeeded or Failed"
I0523 04:09:32.674] May 23 03:06:57.390: INFO: Trying to get logs from node kind-worker pod pod-configmaps-af53a77a-4e3b-4c76-aae2-55b72e89a64a container configmap-volume-test: <nil>
I0523 04:09:32.674] STEP: delete the pod
I0523 04:09:32.674] May 23 03:06:57.402: INFO: Waiting for pod pod-configmaps-af53a77a-4e3b-4c76-aae2-55b72e89a64a to disappear
I0523 04:09:32.674] May 23 03:06:57.403: INFO: Pod pod-configmaps-af53a77a-4e3b-4c76-aae2-55b72e89a64a no longer exists
I0523 04:09:32.674] [AfterEach] [sig-storage] ConfigMap
I0523 04:09:32.674]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.675] May 23 03:06:57.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.675] STEP: Destroying namespace "configmap-744" for this suite.
I0523 04:09:32.675] •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":292,"completed":63,"skipped":1080,"failed":0}
I0523 04:09:32.675] SSSSSSSSSSSS
I0523 04:09:32.675] ------------------------------
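[The "Trying to get logs from node ... container ...: <nil>" lines correspond to a pod-log request after the pod succeeds; the trailing <nil> is the (absent) error. Roughly how that fetch looks with client-go (v0.18+ signatures assumed).]

package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// containerLogs returns the full log of one container in a pod, which the
// test compares against the expected file contents.
func containerLogs(c kubernetes.Interface, ns, pod, container string) (string, error) {
	raw, err := c.CoreV1().Pods(ns).
		GetLogs(pod, &v1.PodLogOptions{Container: container}).
		Do(context.TODO()).Raw()
	return string(raw), err
}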
I0523 04:09:32.675] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
I0523 04:09:32.675]   should be able to convert a non homogeneous list of CRs [Conformance]
I0523 04:09:32.676]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.676] [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 37 lines ...
I0523 04:09:32.682] • [SLOW TEST:6.705 seconds]
I0523 04:09:32.682] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
I0523 04:09:32.682] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.682]   should be able to convert a non homogeneous list of CRs [Conformance]
I0523 04:09:32.682]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.683] ------------------------------
I0523 04:09:32.683] {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":292,"completed":64,"skipped":1092,"failed":0}
I0523 04:09:32.683] S
I0523 04:09:32.683] ------------------------------
I0523 04:09:32.683] [sig-storage] Secrets 
I0523 04:09:32.683]   should be consumable from pods in volume [NodeConformance] [Conformance]
I0523 04:09:32.683]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.683] [BeforeEach] [sig-storage] Secrets
... skipping 10 lines ...
I0523 04:09:32.685] I0523 03:07:04.243855      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.686] [It] should be consumable from pods in volume [NodeConformance] [Conformance]
I0523 04:09:32.686]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.686] I0523 03:07:04.246381      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.686] STEP: Creating secret with name secret-test-64db6be0-aea9-40cb-af63-713ce68c291f
I0523 04:09:32.686] STEP: Creating a pod to test consume secrets
I0523 04:09:32.686] May 23 03:07:04.253: INFO: Waiting up to 5m0s for pod "pod-secrets-4d7e5c79-226c-4303-b3f2-307ae124a305" in namespace "secrets-9969" to be "Succeeded or Failed"
I0523 04:09:32.686] May 23 03:07:04.255: INFO: Pod "pod-secrets-4d7e5c79-226c-4303-b3f2-307ae124a305": Phase="Pending", Reason="", readiness=false. Elapsed: 1.793591ms
I0523 04:09:32.687] May 23 03:07:06.258: INFO: Pod "pod-secrets-4d7e5c79-226c-4303-b3f2-307ae124a305": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004981664s
I0523 04:09:32.687] May 23 03:07:08.262: INFO: Pod "pod-secrets-4d7e5c79-226c-4303-b3f2-307ae124a305": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008632133s
I0523 04:09:32.687] STEP: Saw pod success
I0523 04:09:32.687] May 23 03:07:08.262: INFO: Pod "pod-secrets-4d7e5c79-226c-4303-b3f2-307ae124a305" satisfied condition "Succeeded or Failed"
I0523 04:09:32.687] May 23 03:07:08.264: INFO: Trying to get logs from node kind-worker pod pod-secrets-4d7e5c79-226c-4303-b3f2-307ae124a305 container secret-volume-test: <nil>
I0523 04:09:32.687] STEP: delete the pod
I0523 04:09:32.687] May 23 03:07:08.280: INFO: Waiting for pod pod-secrets-4d7e5c79-226c-4303-b3f2-307ae124a305 to disappear
I0523 04:09:32.688] May 23 03:07:08.283: INFO: Pod pod-secrets-4d7e5c79-226c-4303-b3f2-307ae124a305 no longer exists
I0523 04:09:32.688] [AfterEach] [sig-storage] Secrets
I0523 04:09:32.688]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.688] May 23 03:07:08.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.688] STEP: Destroying namespace "secrets-9969" for this suite.
I0523 04:09:32.688] •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":65,"skipped":1093,"failed":0}
I0523 04:09:32.689] 
I0523 04:09:32.689] ------------------------------
I0523 04:09:32.689] [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
I0523 04:09:32.689]   should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.689]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.689] [BeforeEach] [k8s.io] Kubelet
... skipping 14 lines ...
I0523 04:09:32.692] [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.692]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.692] [AfterEach] [k8s.io] Kubelet
I0523 04:09:32.692]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.692] May 23 03:07:12.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.692] STEP: Destroying namespace "kubelet-test-2601" for this suite.
I0523 04:09:32.693] •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":66,"skipped":1093,"failed":0}
I0523 04:09:32.693] 
I0523 04:09:32.693] ------------------------------
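[The hostAliases test works because the kubelet appends the pod's hostAliases entries to its /etc/hosts before starting containers; the test then reads the file back. A sketch of the pod-spec fragment; IP and hostnames are illustrative.]

package sketch

import v1 "k8s.io/api/core/v1"

// hostAliases returns extra /etc/hosts entries for a PodSpec.HostAliases
// field; every container in the pod sees them.
func hostAliases() []v1.HostAlias {
	return []v1.HostAlias{{
		IP:        "123.45.67.89",
		Hostnames: []string{"foo.remote", "bar.remote"},
	}}
}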
I0523 04:09:32.693] [sig-node] Downward API 
I0523 04:09:32.693]   should provide pod UID as env vars [NodeConformance] [Conformance]
I0523 04:09:32.693]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.693] [BeforeEach] [sig-node] Downward API
... skipping 9 lines ...
I0523 04:09:32.694] I0523 03:07:12.565543      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.695] I0523 03:07:12.565565      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.695] [It] should provide pod UID as env vars [NodeConformance] [Conformance]
I0523 04:09:32.695]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.695] I0523 03:07:12.567559      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.695] STEP: Creating a pod to test downward api env vars
I0523 04:09:32.695] May 23 03:07:12.572: INFO: Waiting up to 5m0s for pod "downward-api-646f3864-f82e-4012-98bd-76a9dd3f229a" in namespace "downward-api-5992" to be "Succeeded or Failed"
I0523 04:09:32.696] May 23 03:07:12.575: INFO: Pod "downward-api-646f3864-f82e-4012-98bd-76a9dd3f229a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.17617ms
I0523 04:09:32.696] May 23 03:07:14.579: INFO: Pod "downward-api-646f3864-f82e-4012-98bd-76a9dd3f229a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006706579s
I0523 04:09:32.696] STEP: Saw pod success
I0523 04:09:32.696] May 23 03:07:14.579: INFO: Pod "downward-api-646f3864-f82e-4012-98bd-76a9dd3f229a" satisfied condition "Succeeded or Failed"
I0523 04:09:32.696] May 23 03:07:14.581: INFO: Trying to get logs from node kind-worker2 pod downward-api-646f3864-f82e-4012-98bd-76a9dd3f229a container dapi-container: <nil>
I0523 04:09:32.696] STEP: delete the pod
I0523 04:09:32.696] May 23 03:07:14.601: INFO: Waiting for pod downward-api-646f3864-f82e-4012-98bd-76a9dd3f229a to disappear
I0523 04:09:32.696] May 23 03:07:14.603: INFO: Pod downward-api-646f3864-f82e-4012-98bd-76a9dd3f229a no longer exists
I0523 04:09:32.696] [AfterEach] [sig-node] Downward API
I0523 04:09:32.697]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.697] May 23 03:07:14.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.697] STEP: Destroying namespace "downward-api-5992" for this suite.
I0523 04:09:32.697] •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":292,"completed":67,"skipped":1093,"failed":0}
I0523 04:09:32.697] SSSSSSSSSSSSSSSSSSS
I0523 04:09:32.697] ------------------------------
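[The "pod UID as env vars" test injects pod metadata through the downward API's fieldRef mechanism; metadata.uid yields the UID the dapi-container prints. A sketch of the env-var definition; the variable name is illustrative.]

package sketch

import v1 "k8s.io/api/core/v1"

// podUIDEnv resolves to the pod's own UID at container start time.
func podUIDEnv() v1.EnvVar {
	return v1.EnvVar{
		Name: "POD_UID",
		ValueFrom: &v1.EnvVarSource{
			FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.uid"},
		},
	}
}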
I0523 04:09:32.697] [k8s.io] Variable Expansion 
I0523 04:09:32.697]   should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
I0523 04:09:32.697]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.697] [BeforeEach] [k8s.io] Variable Expansion
I0523 04:09:32.698]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0523 04:09:32.698] STEP: Creating a kubernetes client
I0523 04:09:32.698] May 23 03:07:14.610: INFO: >>> kubeConfig: /tmp/kubeconfig-142995523
I0523 04:09:32.698] STEP: Building a namespace api object, basename var-expansion
I0523 04:09:32.698] I0523 03:07:14.614794      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.698] I0523 03:07:14.614822      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.698] STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-318
I0523 04:09:32.699] I0523 03:07:14.628297      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.699] STEP: Waiting for a default service account to be provisioned in namespace
I0523 04:09:32.699] I0523 03:07:14.732593      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.699] I0523 03:07:14.732619      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.699] [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
I0523 04:09:32.699]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.699] I0523 03:07:14.734905      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.700] I0523 03:09:14.172594      17 reflector.go:514] k8s.io/kubernetes/test/e2e/node/taints.go:146: Watch close - *v1.Pod total 9 items received
I0523 04:09:32.700] May 23 03:09:14.746: INFO: Deleting pod "var-expansion-6982a536-af2a-40df-adb6-02a6d491888e" in namespace "var-expansion-318"
I0523 04:09:32.700] May 23 03:09:14.750: INFO: Wait up to 5m0s for pod "var-expansion-6982a536-af2a-40df-adb6-02a6d491888e" to be fully deleted
I0523 04:09:32.700] [AfterEach] [k8s.io] Variable Expansion
I0523 04:09:32.700]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.700] May 23 03:09:18.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.700] STEP: Destroying namespace "var-expansion-318" for this suite.
I0523 04:09:32.700] 
I0523 04:09:32.700] • [SLOW TEST:124.152 seconds]
I0523 04:09:32.701] [k8s.io] Variable Expansion
I0523 04:09:32.701] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:32.701]   should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
I0523 04:09:32.701]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.701] ------------------------------
I0523 04:09:32.701] {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":292,"completed":68,"skipped":1112,"failed":0}
I0523 04:09:32.701] SSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.701] ------------------------------
I0523 04:09:32.701] [sig-api-machinery] ResourceQuota 
I0523 04:09:32.701]   should create a ResourceQuota and capture the life of a configMap. [Conformance]
I0523 04:09:32.702]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.702] [BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 26 lines ...
I0523 04:09:32.705] • [SLOW TEST:16.162 seconds]
I0523 04:09:32.705] [sig-api-machinery] ResourceQuota
I0523 04:09:32.706] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.706]   should create a ResourceQuota and capture the life of a configMap. [Conformance]
I0523 04:09:32.706]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.706] ------------------------------
I0523 04:09:32.706] {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":292,"completed":69,"skipped":1136,"failed":0}
I0523 04:09:32.706] SS
I0523 04:09:32.706] ------------------------------
I0523 04:09:32.706] [sig-api-machinery] ResourceQuota 
I0523 04:09:32.707]   should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
I0523 04:09:32.707]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.707] [BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 22 lines ...
I0523 04:09:32.710] • [SLOW TEST:7.138 seconds]
I0523 04:09:32.710] [sig-api-machinery] ResourceQuota
I0523 04:09:32.710] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.710]   should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
I0523 04:09:32.711]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.711] ------------------------------
I0523 04:09:32.711] {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":292,"completed":70,"skipped":1138,"failed":0}
I0523 04:09:32.711] SSSSSSSSSSSSSSSSS
I0523 04:09:32.711] ------------------------------
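[The ResourceQuota tests create a quota and then poll its status until the quota controller has mirrored Spec.Hard into Status.Hard and computed Status.Used ("promptly calculated"). A sketch of the creation step, assuming client-go v0.18+; the resource list is illustrative.]

package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createQuota creates a quota capping pods and configmaps; callers then
// poll the returned object's Status until Used is populated.
func createQuota(c kubernetes.Interface, ns string) (*v1.ResourceQuota, error) {
	rq := &v1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: v1.ResourceQuotaSpec{
			Hard: v1.ResourceList{
				v1.ResourcePods:       resource.MustParse("5"),
				v1.ResourceConfigMaps: resource.MustParse("2"),
			},
		},
	}
	return c.CoreV1().ResourceQuotas(ns).Create(context.TODO(), rq, metav1.CreateOptions{})
}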
I0523 04:09:32.711] [sig-api-machinery] ResourceQuota 
I0523 04:09:32.711]   should verify ResourceQuota with best effort scope. [Conformance]
I0523 04:09:32.712]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.712] [BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 33 lines ...
I0523 04:09:32.716] • [SLOW TEST:16.187 seconds]
I0523 04:09:32.716] [sig-api-machinery] ResourceQuota
I0523 04:09:32.716] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.716]   should verify ResourceQuota with best effort scope. [Conformance]
I0523 04:09:32.717]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.717] ------------------------------
I0523 04:09:32.717] {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":292,"completed":71,"skipped":1155,"failed":0}
I0523 04:09:32.717] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.717] ------------------------------
I0523 04:09:32.717] [sig-network] Services 
I0523 04:09:32.717]   should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
I0523 04:09:32.717]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.717] [BeforeEach] [sig-network] Services
... skipping 91 lines ...
I0523 04:09:32.734] • [SLOW TEST:18.462 seconds]
I0523 04:09:32.734] [sig-network] Services
I0523 04:09:32.734] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
I0523 04:09:32.735]   should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
I0523 04:09:32.735]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.735] ------------------------------
I0523 04:09:32.735] {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":292,"completed":72,"skipped":1191,"failed":0}
I0523 04:09:32.735] SSSSSSSSSSSSSSSSS
I0523 04:09:32.735] ------------------------------
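[The session-affinity test toggles ClientIP affinity on a ClusterIP service and checks whether repeated requests from one client keep landing on the same backend pod. A sketch of the toggle; helper name is ours.]

package sketch

import v1 "k8s.io/api/core/v1"

// withAffinity switches a service between ClientIP affinity (requests from
// one source IP pinned to one endpoint) and the default None.
func withAffinity(svc *v1.Service, on bool) {
	if on {
		svc.Spec.SessionAffinity = v1.ServiceAffinityClientIP
	} else {
		svc.Spec.SessionAffinity = v1.ServiceAffinityNone
	}
}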
I0523 04:09:32.735] [sig-storage] Projected secret 
I0523 04:09:32.735]   optional updates should be reflected in volume [NodeConformance] [Conformance]
I0523 04:09:32.735]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.735] [BeforeEach] [sig-storage] Projected secret
... skipping 19 lines ...
I0523 04:09:32.739] STEP: Creating secret with name s-test-opt-create-d381fc9f-23fc-412c-a5e5-ff0ffc454ccf
I0523 04:09:32.739] STEP: waiting to observe update in volume
I0523 04:09:32.739] [AfterEach] [sig-storage] Projected secret
I0523 04:09:32.739]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.739] May 23 03:10:20.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.739] STEP: Destroying namespace "projected-9918" for this suite.
I0523 04:09:32.740] •{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":73,"skipped":1208,"failed":0}
I0523 04:09:32.740] SSSSSSSSSSSSSSSS
I0523 04:09:32.740] ------------------------------
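[The "optional updates" test mounts a projected volume referencing a secret that does not exist yet; with Optional set, the pod starts anyway, and the kubelet refreshes the mounted files once the secret is created, which is the "waiting to observe update in volume" step. A sketch of that volume; the volume name is illustrative.]

package sketch

import v1 "k8s.io/api/core/v1"

// optionalSecretVolume projects a possibly-absent secret into the pod; the
// kubelet updates the files in place when the secret appears or changes.
func optionalSecretVolume(secretName string) v1.Volume {
	optional := true
	return v1.Volume{
		Name: "projected-secret",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					Secret: &v1.SecretProjection{
						LocalObjectReference: v1.LocalObjectReference{Name: secretName},
						Optional:             &optional,
					},
				}},
			},
		},
	}
}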
I0523 04:09:32.740] [sig-storage] Subpath Atomic writer volumes 
I0523 04:09:32.740]   should support subpaths with downward pod [LinuxOnly] [Conformance]
I0523 04:09:32.740]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.740] [BeforeEach] [sig-storage] Subpath
... skipping 13 lines ...
I0523 04:09:32.742] I0523 03:10:21.041875      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.742] STEP: Setting up data
I0523 04:09:32.742] [It] should support subpaths with downward pod [LinuxOnly] [Conformance]
I0523 04:09:32.743]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.743] STEP: Creating pod pod-subpath-test-downwardapi-vxqc
I0523 04:09:32.743] STEP: Creating a pod to test atomic-volume-subpath
I0523 04:09:32.743] May 23 03:10:21.052: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-vxqc" in namespace "subpath-7010" to be "Succeeded or Failed"
I0523 04:09:32.743] May 23 03:10:21.056: INFO: Pod "pod-subpath-test-downwardapi-vxqc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.914868ms
I0523 04:09:32.743] May 23 03:10:23.058: INFO: Pod "pod-subpath-test-downwardapi-vxqc": Phase="Running", Reason="", readiness=true. Elapsed: 2.006274211s
I0523 04:09:32.743] May 23 03:10:25.061: INFO: Pod "pod-subpath-test-downwardapi-vxqc": Phase="Running", Reason="", readiness=true. Elapsed: 4.008731965s
I0523 04:09:32.743] May 23 03:10:27.063: INFO: Pod "pod-subpath-test-downwardapi-vxqc": Phase="Running", Reason="", readiness=true. Elapsed: 6.011435934s
I0523 04:09:32.744] May 23 03:10:29.066: INFO: Pod "pod-subpath-test-downwardapi-vxqc": Phase="Running", Reason="", readiness=true. Elapsed: 8.014498572s
I0523 04:09:32.744] May 23 03:10:31.069: INFO: Pod "pod-subpath-test-downwardapi-vxqc": Phase="Running", Reason="", readiness=true. Elapsed: 10.017325039s
I0523 04:09:32.744] May 23 03:10:33.072: INFO: Pod "pod-subpath-test-downwardapi-vxqc": Phase="Running", Reason="", readiness=true. Elapsed: 12.020111272s
I0523 04:09:32.744] May 23 03:10:35.074: INFO: Pod "pod-subpath-test-downwardapi-vxqc": Phase="Running", Reason="", readiness=true. Elapsed: 14.022606201s
I0523 04:09:32.744] May 23 03:10:37.077: INFO: Pod "pod-subpath-test-downwardapi-vxqc": Phase="Running", Reason="", readiness=true. Elapsed: 16.025565917s
I0523 04:09:32.744] May 23 03:10:39.080: INFO: Pod "pod-subpath-test-downwardapi-vxqc": Phase="Running", Reason="", readiness=true. Elapsed: 18.028124071s
I0523 04:09:32.745] May 23 03:10:41.083: INFO: Pod "pod-subpath-test-downwardapi-vxqc": Phase="Running", Reason="", readiness=true. Elapsed: 20.030844418s
I0523 04:09:32.745] May 23 03:10:43.086: INFO: Pod "pod-subpath-test-downwardapi-vxqc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.033784559s
I0523 04:09:32.745] STEP: Saw pod success
I0523 04:09:32.745] May 23 03:10:43.086: INFO: Pod "pod-subpath-test-downwardapi-vxqc" satisfied condition "Succeeded or Failed"
I0523 04:09:32.745] May 23 03:10:43.088: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-downwardapi-vxqc container test-container-subpath-downwardapi-vxqc: <nil>
I0523 04:09:32.745] STEP: delete the pod
I0523 04:09:32.746] May 23 03:10:43.107: INFO: Waiting for pod pod-subpath-test-downwardapi-vxqc to disappear
I0523 04:09:32.746] May 23 03:10:43.109: INFO: Pod pod-subpath-test-downwardapi-vxqc no longer exists
I0523 04:09:32.746] STEP: Deleting pod pod-subpath-test-downwardapi-vxqc
I0523 04:09:32.746] May 23 03:10:43.109: INFO: Deleting pod "pod-subpath-test-downwardapi-vxqc" in namespace "subpath-7010"
... skipping 7 lines ...
I0523 04:09:32.747] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
I0523 04:09:32.747]   Atomic writer volumes
I0523 04:09:32.747]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
I0523 04:09:32.747]     should support subpaths with downward pod [LinuxOnly] [Conformance]
I0523 04:09:32.747]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.747] ------------------------------
I0523 04:09:32.748] {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":292,"completed":74,"skipped":1224,"failed":0}
I0523 04:09:32.748] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.748] ------------------------------
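[The atomic-writer subpath test mounts a single item of a downward-API volume via subPath and keeps the pod Running for ~20s (the long run of Running lines above) while the container repeatedly re-reads the file to verify the kubelet's atomic writes. A sketch of the mount; names are illustrative.]

package sketch

import v1 "k8s.io/api/core/v1"

// subpathMount exposes only one entry of the volume at MountPath rather
// than the whole volume directory.
func subpathMount() v1.VolumeMount {
	return v1.VolumeMount{
		Name:      "podinfo",
		MountPath: "/test-volume",
		SubPath:   "podname",
	}
}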
I0523 04:09:32.748] [sig-storage] EmptyDir volumes 
I0523 04:09:32.748]   should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.748]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.748] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 9 lines ...
I0523 04:09:32.750] I0523 03:10:43.240784      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.750] I0523 03:10:43.240812      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.750] [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.751]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.751] STEP: Creating a pod to test emptydir 0644 on node default medium
I0523 04:09:32.751] I0523 03:10:43.243123      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.751] May 23 03:10:43.248: INFO: Waiting up to 5m0s for pod "pod-41f16677-40a2-4f53-866d-aa9cacd63621" in namespace "emptydir-1441" to be "Succeeded or Failed"
I0523 04:09:32.751] May 23 03:10:43.250: INFO: Pod "pod-41f16677-40a2-4f53-866d-aa9cacd63621": Phase="Pending", Reason="", readiness=false. Elapsed: 1.889242ms
I0523 04:09:32.751] May 23 03:10:45.253: INFO: Pod "pod-41f16677-40a2-4f53-866d-aa9cacd63621": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004771314s
I0523 04:09:32.752] STEP: Saw pod success
I0523 04:09:32.752] May 23 03:10:45.253: INFO: Pod "pod-41f16677-40a2-4f53-866d-aa9cacd63621" satisfied condition "Succeeded or Failed"
I0523 04:09:32.752] May 23 03:10:45.255: INFO: Trying to get logs from node kind-worker pod pod-41f16677-40a2-4f53-866d-aa9cacd63621 container test-container: <nil>
I0523 04:09:32.752] STEP: delete the pod
I0523 04:09:32.752] May 23 03:10:45.271: INFO: Waiting for pod pod-41f16677-40a2-4f53-866d-aa9cacd63621 to disappear
I0523 04:09:32.752] May 23 03:10:45.278: INFO: Pod pod-41f16677-40a2-4f53-866d-aa9cacd63621 no longer exists
I0523 04:09:32.752] [AfterEach] [sig-storage] EmptyDir volumes
I0523 04:09:32.752]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.753] May 23 03:10:45.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.753] STEP: Destroying namespace "emptydir-1441" for this suite.
I0523 04:09:32.753] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":75,"skipped":1254,"failed":0}
I0523 04:09:32.753] SSS
I0523 04:09:32.753] ------------------------------
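[The emptyDir "(non-root,0644,default)" case runs the test container as a non-root user against an emptyDir on the node's default medium; the container writes a 0644 file and the test reads it back. A partial PodSpec sketch (containers omitted); the UID is illustrative.]

package sketch

import v1 "k8s.io/api/core/v1"

// nonRootEmptyDirPod is the volume/security fragment of such a pod: any
// non-root UID plus an emptyDir with no medium override.
func nonRootEmptyDirPod() v1.PodSpec {
	uid := int64(1001) // hypothetical non-root UID
	return v1.PodSpec{
		SecurityContext: &v1.PodSecurityContext{RunAsUser: &uid},
		Volumes: []v1.Volume{{
			Name:         "test-volume",
			VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}},
		}},
	}
}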
I0523 04:09:32.753] [k8s.io] Variable Expansion 
I0523 04:09:32.753]   should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
I0523 04:09:32.753]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.753] [BeforeEach] [k8s.io] Variable Expansion
... skipping 33 lines ...
I0523 04:09:32.758] • [SLOW TEST:42.857 seconds]
I0523 04:09:32.758] [k8s.io] Variable Expansion
I0523 04:09:32.758] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:32.758]   should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
I0523 04:09:32.759]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.759] ------------------------------
I0523 04:09:32.759] {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":292,"completed":76,"skipped":1257,"failed":0}
I0523 04:09:32.759] SSSSSSS
I0523 04:09:32.759] ------------------------------
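[The "writing subpaths" variable-expansion test relies on subPathExpr, where $(VAR) references are expanded from the container's environment at mount time, so each pod writes under its own directory inside a shared volume. A sketch of the two pieces involved; names are illustrative.]

package sketch

import v1 "k8s.io/api/core/v1"

// expandedSubpath pairs a downward-API env var with a mount whose subPathExpr
// expands it, yielding a per-pod writable subdirectory.
func expandedSubpath() (v1.EnvVar, v1.VolumeMount) {
	env := v1.EnvVar{
		Name: "POD_NAME",
		ValueFrom: &v1.EnvVarSource{
			FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
		},
	}
	mount := v1.VolumeMount{
		Name:        "workdir",
		MountPath:   "/subpath_mount",
		SubPathExpr: "$(POD_NAME)",
	}
	return env, mount
}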
I0523 04:09:32.759] [sig-apps] ReplicaSet 
I0523 04:09:32.759]   should serve a basic image on each replica with a public image  [Conformance]
I0523 04:09:32.759]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.759] [BeforeEach] [sig-apps] ReplicaSet
... skipping 26 lines ...
I0523 04:09:32.763] • [SLOW TEST:10.153 seconds]
I0523 04:09:32.763] [sig-apps] ReplicaSet
I0523 04:09:32.763] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0523 04:09:32.764]   should serve a basic image on each replica with a public image  [Conformance]
I0523 04:09:32.764]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.764] ------------------------------
I0523 04:09:32.764] {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":292,"completed":77,"skipped":1264,"failed":0}
I0523 04:09:32.764] S
I0523 04:09:32.764] ------------------------------
I0523 04:09:32.764] [sig-storage] Downward API volume 
I0523 04:09:32.764]   should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
I0523 04:09:32.764]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.764] [BeforeEach] [sig-storage] Downward API volume
... skipping 11 lines ...
I0523 04:09:32.766] [BeforeEach] [sig-storage] Downward API volume
I0523 04:09:32.766]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
I0523 04:09:32.766] [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
I0523 04:09:32.767]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.767] I0523 03:11:38.419292      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.767] STEP: Creating a pod to test downward API volume plugin
I0523 04:09:32.767] May 23 03:11:38.424: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b299ca05-0bd7-4ec3-9e62-03e17ef57a9c" in namespace "downward-api-1402" to be "Succeeded or Failed"
I0523 04:09:32.767] May 23 03:11:38.427: INFO: Pod "downwardapi-volume-b299ca05-0bd7-4ec3-9e62-03e17ef57a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.429362ms
I0523 04:09:32.767] May 23 03:11:40.430: INFO: Pod "downwardapi-volume-b299ca05-0bd7-4ec3-9e62-03e17ef57a9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005988993s
I0523 04:09:32.768] STEP: Saw pod success
I0523 04:09:32.768] May 23 03:11:40.430: INFO: Pod "downwardapi-volume-b299ca05-0bd7-4ec3-9e62-03e17ef57a9c" satisfied condition "Succeeded or Failed"
I0523 04:09:32.768] May 23 03:11:40.432: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-b299ca05-0bd7-4ec3-9e62-03e17ef57a9c container client-container: <nil>
I0523 04:09:32.768] STEP: delete the pod
I0523 04:09:32.768] May 23 03:11:40.441: INFO: Waiting for pod downwardapi-volume-b299ca05-0bd7-4ec3-9e62-03e17ef57a9c to disappear
I0523 04:09:32.768] May 23 03:11:40.443: INFO: Pod downwardapi-volume-b299ca05-0bd7-4ec3-9e62-03e17ef57a9c no longer exists
I0523 04:09:32.769] [AfterEach] [sig-storage] Downward API volume
I0523 04:09:32.769]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.769] May 23 03:11:40.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.769] STEP: Destroying namespace "downward-api-1402" for this suite.
I0523 04:09:32.769] •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":78,"skipped":1265,"failed":0}
I0523 04:09:32.769] SSSSSSSSSSSSSSS
I0523 04:09:32.770] ------------------------------
I0523 04:09:32.770] [sig-network] Services 
I0523 04:09:32.770]   should be able to create a functioning NodePort service [Conformance]
I0523 04:09:32.770]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.770] [BeforeEach] [sig-network] Services
... skipping 46 lines ...
I0523 04:09:32.778] • [SLOW TEST:7.091 seconds]
I0523 04:09:32.778] [sig-network] Services
I0523 04:09:32.778] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
I0523 04:09:32.778]   should be able to create a functioning NodePort service [Conformance]
I0523 04:09:32.779]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.779] ------------------------------
I0523 04:09:32.779] {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":292,"completed":79,"skipped":1280,"failed":0}
I0523 04:09:32.779] SSSSS
I0523 04:09:32.779] ------------------------------
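[For the NodePort test, kube-proxy opens the allocated node port on every node and forwards traffic to the service's endpoints, which is what makes the service reachable from outside the cluster. A sketch of such a service; names and port numbers are illustrative.]

package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// nodePortService leaves NodePort at 0 so the apiserver allocates one from
// the node-port range; the test then hits <anyNodeIP>:<allocatedPort>.
func nodePortService() v1.Service {
	return v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-test"},
		Spec: v1.ServiceSpec{
			Type:     v1.ServiceTypeNodePort,
			Selector: map[string]string{"app": "nodeport-test"},
			Ports: []v1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
}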
I0523 04:09:32.779] [sig-storage] EmptyDir volumes 
I0523 04:09:32.779]   should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.780]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.780] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 9 lines ...
I0523 04:09:32.782] I0523 03:11:47.664099      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.782] I0523 03:11:47.664127      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.782] I0523 03:11:47.666308      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.782] [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.782]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.782] STEP: Creating a pod to test emptydir 0777 on node default medium
I0523 04:09:32.783] May 23 03:11:47.671: INFO: Waiting up to 5m0s for pod "pod-8359e878-b96c-4ce6-906c-3b05c4cd6f88" in namespace "emptydir-7465" to be "Succeeded or Failed"
I0523 04:09:32.783] May 23 03:11:47.673: INFO: Pod "pod-8359e878-b96c-4ce6-906c-3b05c4cd6f88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.001306ms
I0523 04:09:32.783] May 23 03:11:49.676: INFO: Pod "pod-8359e878-b96c-4ce6-906c-3b05c4cd6f88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004855627s
I0523 04:09:32.783] May 23 03:11:51.678: INFO: Pod "pod-8359e878-b96c-4ce6-906c-3b05c4cd6f88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007700177s
I0523 04:09:32.783] STEP: Saw pod success
I0523 04:09:32.784] May 23 03:11:51.678: INFO: Pod "pod-8359e878-b96c-4ce6-906c-3b05c4cd6f88" satisfied condition "Succeeded or Failed"
I0523 04:09:32.784] May 23 03:11:51.680: INFO: Trying to get logs from node kind-worker2 pod pod-8359e878-b96c-4ce6-906c-3b05c4cd6f88 container test-container: <nil>
I0523 04:09:32.784] STEP: delete the pod
I0523 04:09:32.784] May 23 03:11:51.692: INFO: Waiting for pod pod-8359e878-b96c-4ce6-906c-3b05c4cd6f88 to disappear
I0523 04:09:32.784] May 23 03:11:51.693: INFO: Pod pod-8359e878-b96c-4ce6-906c-3b05c4cd6f88 no longer exists
I0523 04:09:32.784] [AfterEach] [sig-storage] EmptyDir volumes
I0523 04:09:32.785]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.785] May 23 03:11:51.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.785] STEP: Destroying namespace "emptydir-7465" for this suite.
I0523 04:09:32.785] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":80,"skipped":1285,"failed":0}
I0523 04:09:32.785] SSSSSS
I0523 04:09:32.785] ------------------------------
I0523 04:09:32.785] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
I0523 04:09:32.785]   watch on custom resource definition objects [Conformance]
I0523 04:09:32.786]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.786] [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
... skipping 34 lines ...
I0523 04:09:32.793] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.793]   CustomResourceDefinition Watch
I0523 04:09:32.794]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
I0523 04:09:32.794]     watch on custom resource definition objects [Conformance]
I0523 04:09:32.794]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.794] ------------------------------
I0523 04:09:32.794] {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":292,"completed":81,"skipped":1291,"failed":0}
I0523 04:09:32.794] SSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.795] ------------------------------
I0523 04:09:32.795] [k8s.io] Variable Expansion 
I0523 04:09:32.795]   should allow composing env vars into new env vars [NodeConformance] [Conformance]
I0523 04:09:32.795]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.795] [BeforeEach] [k8s.io] Variable Expansion
... skipping 9 lines ...
I0523 04:09:32.797] I0523 03:12:53.055029      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.797] I0523 03:12:53.055058      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.797] [It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
I0523 04:09:32.797]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.797] STEP: Creating a pod to test env composition
I0523 04:09:32.798] I0523 03:12:53.057374      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.798] May 23 03:12:53.061: INFO: Waiting up to 5m0s for pod "var-expansion-d0a7b0d8-edb5-4a36-84a6-fa8a003e8c06" in namespace "var-expansion-8816" to be "Succeeded or Failed"
I0523 04:09:32.798] May 23 03:12:53.063: INFO: Pod "var-expansion-d0a7b0d8-edb5-4a36-84a6-fa8a003e8c06": Phase="Pending", Reason="", readiness=false. Elapsed: 1.936752ms
I0523 04:09:32.798] May 23 03:12:55.066: INFO: Pod "var-expansion-d0a7b0d8-edb5-4a36-84a6-fa8a003e8c06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004721801s
I0523 04:09:32.798] STEP: Saw pod success
I0523 04:09:32.799] May 23 03:12:55.066: INFO: Pod "var-expansion-d0a7b0d8-edb5-4a36-84a6-fa8a003e8c06" satisfied condition "Succeeded or Failed"
I0523 04:09:32.799] May 23 03:12:55.068: INFO: Trying to get logs from node kind-worker pod var-expansion-d0a7b0d8-edb5-4a36-84a6-fa8a003e8c06 container dapi-container: <nil>
I0523 04:09:32.799] STEP: delete the pod
I0523 04:09:32.799] May 23 03:12:55.084: INFO: Waiting for pod var-expansion-d0a7b0d8-edb5-4a36-84a6-fa8a003e8c06 to disappear
I0523 04:09:32.799] May 23 03:12:55.086: INFO: Pod var-expansion-d0a7b0d8-edb5-4a36-84a6-fa8a003e8c06 no longer exists
I0523 04:09:32.799] [AfterEach] [k8s.io] Variable Expansion
I0523 04:09:32.800]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.800] May 23 03:12:55.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.800] STEP: Destroying namespace "var-expansion-8816" for this suite.
I0523 04:09:32.800] •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":292,"completed":82,"skipped":1314,"failed":0}
I0523 04:09:32.800] 
I0523 04:09:32.800] ------------------------------
I0523 04:09:32.801] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
I0523 04:09:32.801]   should have a working scale subresource [Conformance]
I0523 04:09:32.801]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.801] [BeforeEach] [sig-apps] StatefulSet
... skipping 38 lines ...
I0523 04:09:32.806] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0523 04:09:32.806]   [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
I0523 04:09:32.807]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:32.807]     should have a working scale subresource [Conformance]
I0523 04:09:32.807]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.807] ------------------------------
I0523 04:09:32.807] {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":292,"completed":83,"skipped":1314,"failed":0}
I0523 04:09:32.807] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.807] ------------------------------
I0523 04:09:32.808] [k8s.io] Pods 
I0523 04:09:32.808]   should contain environment variables for services [NodeConformance] [Conformance]
I0523 04:09:32.808]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.808] [BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
I0523 04:09:32.810] I0523 03:13:35.440224      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.810] [BeforeEach] [k8s.io] Pods
I0523 04:09:32.810]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179
I0523 04:09:32.810] I0523 03:13:35.443621      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.810] [It] should contain environment variables for services [NodeConformance] [Conformance]
I0523 04:09:32.811]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.811] May 23 03:13:37.471: INFO: Waiting up to 5m0s for pod "client-envvars-2e554143-57ba-4d09-9a3d-857637bf5e99" in namespace "pods-8783" to be "Succeeded or Failed"
I0523 04:09:32.811] May 23 03:13:37.479: INFO: Pod "client-envvars-2e554143-57ba-4d09-9a3d-857637bf5e99": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107959ms
I0523 04:09:32.811] May 23 03:13:39.482: INFO: Pod "client-envvars-2e554143-57ba-4d09-9a3d-857637bf5e99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011241722s
I0523 04:09:32.811] STEP: Saw pod success
I0523 04:09:32.812] May 23 03:13:39.482: INFO: Pod "client-envvars-2e554143-57ba-4d09-9a3d-857637bf5e99" satisfied condition "Succeeded or Failed"
I0523 04:09:32.812] May 23 03:13:39.485: INFO: Trying to get logs from node kind-worker2 pod client-envvars-2e554143-57ba-4d09-9a3d-857637bf5e99 container env3cont: <nil>
I0523 04:09:32.812] STEP: delete the pod
I0523 04:09:32.812] May 23 03:13:39.506: INFO: Waiting for pod client-envvars-2e554143-57ba-4d09-9a3d-857637bf5e99 to disappear
I0523 04:09:32.812] May 23 03:13:39.507: INFO: Pod client-envvars-2e554143-57ba-4d09-9a3d-857637bf5e99 no longer exists
I0523 04:09:32.812] [AfterEach] [k8s.io] Pods
I0523 04:09:32.812]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.813] May 23 03:13:39.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.813] STEP: Destroying namespace "pods-8783" for this suite.
I0523 04:09:32.813] •{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":292,"completed":84,"skipped":1402,"failed":0}
I0523 04:09:32.813] SSSSSSSSSSSSSSS
I0523 04:09:32.813] ------------------------------
I0523 04:09:32.813] [sig-node] Downward API 
I0523 04:09:32.813]   should provide host IP as an env var [NodeConformance] [Conformance]
I0523 04:09:32.814]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.814] [BeforeEach] [sig-node] Downward API
... skipping 9 lines ...
I0523 04:09:32.815] I0523 03:13:39.636129      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.816] I0523 03:13:39.636154      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.816] [It] should provide host IP as an env var [NodeConformance] [Conformance]
I0523 04:09:32.816]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.816] I0523 03:13:39.638691      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.816] STEP: Creating a pod to test downward api env vars
I0523 04:09:32.817] May 23 03:13:39.643: INFO: Waiting up to 5m0s for pod "downward-api-e3a3a5b2-093c-4d86-ae33-856130e2d660" in namespace "downward-api-65" to be "Succeeded or Failed"
I0523 04:09:32.817] May 23 03:13:39.646: INFO: Pod "downward-api-e3a3a5b2-093c-4d86-ae33-856130e2d660": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142561ms
I0523 04:09:32.817] May 23 03:13:41.649: INFO: Pod "downward-api-e3a3a5b2-093c-4d86-ae33-856130e2d660": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005197592s
I0523 04:09:32.817] STEP: Saw pod success
I0523 04:09:32.817] May 23 03:13:41.649: INFO: Pod "downward-api-e3a3a5b2-093c-4d86-ae33-856130e2d660" satisfied condition "Succeeded or Failed"
I0523 04:09:32.818] May 23 03:13:41.650: INFO: Trying to get logs from node kind-worker pod downward-api-e3a3a5b2-093c-4d86-ae33-856130e2d660 container dapi-container: <nil>
I0523 04:09:32.818] STEP: delete the pod
I0523 04:09:32.818] May 23 03:13:41.663: INFO: Waiting for pod downward-api-e3a3a5b2-093c-4d86-ae33-856130e2d660 to disappear
I0523 04:09:32.818] May 23 03:13:41.665: INFO: Pod downward-api-e3a3a5b2-093c-4d86-ae33-856130e2d660 no longer exists
I0523 04:09:32.818] [AfterEach] [sig-node] Downward API
I0523 04:09:32.818]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.818] May 23 03:13:41.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.819] STEP: Destroying namespace "downward-api-65" for this suite.
I0523 04:09:32.819] •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":292,"completed":85,"skipped":1417,"failed":0}
I0523 04:09:32.819] SSSSSSSSSSS
I0523 04:09:32.819] ------------------------------
I0523 04:09:32.819] [sig-cli] Kubectl client Kubectl describe 
I0523 04:09:32.819]   should check if kubectl describe prints relevant information for rc and pods  [Conformance]
I0523 04:09:32.819]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.820] [BeforeEach] [sig-cli] Kubectl client
... skipping 29 lines ...
I0523 04:09:32.823] May 23 03:13:44.305: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
I0523 04:09:32.824] May 23 03:13:44.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-142995523 describe pod agnhost-master-vhdjd --namespace=kubectl-8374'
I0523 04:09:32.824] May 23 03:13:44.399: INFO: stderr: ""
I0523 04:09:32.825] May 23 03:13:44.399: INFO: stdout: "Name:         agnhost-master-vhdjd\nNamespace:    kubectl-8374\nPriority:     0\nNode:         kind-worker/172.17.0.4\nStart Time:   Sat, 23 May 2020 03:13:42 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.244.1.91\nIPs:\n  IP:           10.244.1.91\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://23eb57b7131ee488e8f4655d980aaa1cacae4d7338ffce7b4c9a75143ace1846\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 23 May 2020 03:13:43 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-52dht (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-52dht:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-52dht\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From                  Message\n  ----    ------     ----  ----                  -------\n  Normal  Scheduled  2s    default-scheduler     Successfully assigned kubectl-8374/agnhost-master-vhdjd to kind-worker\n  Normal  Pulled     2s    kubelet, kind-worker  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n  Normal  Created    1s    kubelet, kind-worker  Created container agnhost-master\n  Normal  Started    1s    kubelet, kind-worker  Started container agnhost-master\n"
I0523 04:09:32.826] May 23 03:13:44.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-142995523 describe rc agnhost-master --namespace=kubectl-8374'
I0523 04:09:32.826] May 23 03:13:44.504: INFO: stderr: ""
I0523 04:09:32.826] May 23 03:13:44.504: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-8374\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  2s    replication-controller  Created pod: agnhost-master-vhdjd\n"
I0523 04:09:32.826] May 23 03:13:44.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-142995523 describe service agnhost-master --namespace=kubectl-8374'
I0523 04:09:32.827] May 23 03:13:44.641: INFO: stderr: ""
I0523 04:09:32.827] May 23 03:13:44.641: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-8374\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.100.27.227\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.1.91:6379\nSession Affinity:  None\nEvents:            <none>\n"
I0523 04:09:32.827] May 23 03:13:44.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-142995523 describe node kind-control-plane'
I0523 04:09:32.827] May 23 03:13:44.785: INFO: stderr: ""
I0523 04:09:32.832] May 23 03:13:44.785: INFO: stdout: "Name:               kind-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kind-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 23 May 2020 02:38:55 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kind-control-plane\n  AcquireTime:     <unset>\n  RenewTime:       Sat, 23 May 2020 03:13:44 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Sat, 23 May 2020 03:10:04 +0000   Sat, 23 May 2020 02:38:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Sat, 23 May 2020 03:10:04 +0000   Sat, 23 May 2020 02:38:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Sat, 23 May 2020 03:10:04 +0000   Sat, 23 May 2020 02:38:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Sat, 23 May 2020 03:10:04 +0000   Sat, 23 May 2020 02:39:33 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.17.0.3\n  Hostname:    kind-control-plane\nCapacity:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             53582972Ki\n  pods:               110\nAllocatable:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             53582972Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 72a9aa076af44c9892597bb10a5ba592\n  System UUID:                e2fc2a57-0d66-4485-ab45-56735907e08a\n  Boot ID:                    6f975079-e391-4bd2-b9a1-51727727244e\n  Kernel Version:             4.15.0-1044-gke\n  OS Image:                   Ubuntu Eoan Ermine (development branch)\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.0-27-g54658b88\n  Kubelet Version:            v1.19.0-beta.0.135+f01d848c4808bd\n  Kube-Proxy Version:         v1.19.0-beta.0.135+f01d848c4808bd\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (6 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 etcd-kind-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         34m\n  kube-system                 kindnet-jt7wh                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      34m\n  kube-system                 kube-apiserver-kind-control-plane             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34m\n  kube-system                 kube-controller-manager-kind-control-plane    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34m\n  kube-system                 kube-proxy-7zcq6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         34m\n  kube-system                 kube-scheduler-kind-control-plane             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests   Limits\n  --------           --------   ------\n  cpu                650m (8%)  100m (1%)\n  memory             50Mi (0%)  50Mi (0%)\n  ephemeral-storage  0 (0%)     0 (0%)\n  hugepages-1Gi      0 (0%)     0 (0%)\n  hugepages-2Mi      0 (0%)     0 (0%)\nEvents:\n  Type     Reason                    Age                From                            Message\n  ----     ------                    ----               ----                            -------\n  Normal   Starting                  35m                kubelet, kind-control-plane     Starting kubelet.\n  Normal   NodeAllocatableEnforced   35m                kubelet, kind-control-plane     Updated Node Allocatable limit across pods\n  Normal   NodeHasSufficientMemory   35m (x3 over 35m)  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure     35m (x3 over 35m)  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID      35m (x3 over 35m)  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientPID\n  Warning  CheckLimitsForResolvConf  35m                kubelet, kind-control-plane     Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\n  Warning  CheckLimitsForResolvConf  34m                kubelet, kind-control-plane     Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\n  Normal   Starting                  34m                kubelet, kind-control-plane     Starting kubelet.\n  Normal   NodeHasSufficientMemory   34m                kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure     34m                kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID      34m                kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientPID\n  Normal   NodeAllocatableEnforced   34m                kubelet, kind-control-plane     Updated Node Allocatable limit across pods\n  Normal   Starting                  34m                kube-proxy, kind-control-plane  Starting kube-proxy.\n  Normal   NodeReady                 34m                kubelet, kind-control-plane     Node kind-control-plane status is now: NodeReady\n"
I0523 04:09:32.832] May 23 03:13:44.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-142995523 describe namespace kubectl-8374'
I0523 04:09:32.833] May 23 03:13:44.881: INFO: stderr: ""
I0523 04:09:32.833] May 23 03:13:44.881: INFO: stdout: "Name:         kubectl-8374\nLabels:       e2e-framework=kubectl\n              e2e-run=f5abc344-565c-4bc6-a6e6-d611de55dd5b\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
I0523 04:09:32.833] [AfterEach] [sig-cli] Kubectl client
I0523 04:09:32.833]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.833] May 23 03:13:44.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.833] STEP: Destroying namespace "kubectl-8374" for this suite.
I0523 04:09:32.834] •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":292,"completed":86,"skipped":1428,"failed":0}
I0523 04:09:32.834] S
I0523 04:09:32.834] ------------------------------
I0523 04:09:32.834] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0523 04:09:32.834]   should be able to deny custom resource creation, update and deletion [Conformance]
I0523 04:09:32.834]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.835] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 41 lines ...
I0523 04:09:32.841] • [SLOW TEST:6.850 seconds]
I0523 04:09:32.842] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0523 04:09:32.842] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.842]   should be able to deny custom resource creation, update and deletion [Conformance]
I0523 04:09:32.842]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.842] ------------------------------
I0523 04:09:32.842] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":292,"completed":87,"skipped":1429,"failed":0}
I0523 04:09:32.843] SSSSSSSSSSSSSSSSSS
I0523 04:09:32.843] ------------------------------
I0523 04:09:32.843] [sig-storage] Subpath Atomic writer volumes 
I0523 04:09:32.843]   should support subpaths with secret pod [LinuxOnly] [Conformance]
I0523 04:09:32.843]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.843] [BeforeEach] [sig-storage] Subpath
... skipping 13 lines ...
I0523 04:09:32.845] STEP: Setting up data
I0523 04:09:32.846] I0523 03:13:51.871302      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.846] [It] should support subpaths with secret pod [LinuxOnly] [Conformance]
I0523 04:09:32.846]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.846] STEP: Creating pod pod-subpath-test-secret-xbz2
I0523 04:09:32.846] STEP: Creating a pod to test atomic-volume-subpath
I0523 04:09:32.846] May 23 03:13:51.881: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-xbz2" in namespace "subpath-7222" to be "Succeeded or Failed"
I0523 04:09:32.846] May 23 03:13:51.883: INFO: Pod "pod-subpath-test-secret-xbz2": Phase="Pending", Reason="", readiness=false. Elapsed: 1.617526ms
I0523 04:09:32.847] May 23 03:13:53.886: INFO: Pod "pod-subpath-test-secret-xbz2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004876922s
I0523 04:09:32.847] May 23 03:13:55.889: INFO: Pod "pod-subpath-test-secret-xbz2": Phase="Running", Reason="", readiness=true. Elapsed: 4.008200529s
I0523 04:09:32.847] May 23 03:13:57.893: INFO: Pod "pod-subpath-test-secret-xbz2": Phase="Running", Reason="", readiness=true. Elapsed: 6.011421066s
I0523 04:09:32.847] May 23 03:13:59.896: INFO: Pod "pod-subpath-test-secret-xbz2": Phase="Running", Reason="", readiness=true. Elapsed: 8.014352936s
I0523 04:09:32.847] May 23 03:14:01.899: INFO: Pod "pod-subpath-test-secret-xbz2": Phase="Running", Reason="", readiness=true. Elapsed: 10.017347007s
... skipping 2 lines ...
I0523 04:09:32.848] May 23 03:14:07.907: INFO: Pod "pod-subpath-test-secret-xbz2": Phase="Running", Reason="", readiness=true. Elapsed: 16.02608678s
I0523 04:09:32.848] May 23 03:14:09.910: INFO: Pod "pod-subpath-test-secret-xbz2": Phase="Running", Reason="", readiness=true. Elapsed: 18.029018513s
I0523 04:09:32.848] May 23 03:14:11.913: INFO: Pod "pod-subpath-test-secret-xbz2": Phase="Running", Reason="", readiness=true. Elapsed: 20.031723336s
I0523 04:09:32.848] May 23 03:14:13.916: INFO: Pod "pod-subpath-test-secret-xbz2": Phase="Running", Reason="", readiness=true. Elapsed: 22.034547621s
I0523 04:09:32.849] May 23 03:14:15.919: INFO: Pod "pod-subpath-test-secret-xbz2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.037982281s
I0523 04:09:32.849] STEP: Saw pod success
I0523 04:09:32.849] May 23 03:14:15.919: INFO: Pod "pod-subpath-test-secret-xbz2" satisfied condition "Succeeded or Failed"
I0523 04:09:32.849] May 23 03:14:15.922: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-secret-xbz2 container test-container-subpath-secret-xbz2: <nil>
I0523 04:09:32.849] STEP: delete the pod
I0523 04:09:32.850] May 23 03:14:15.935: INFO: Waiting for pod pod-subpath-test-secret-xbz2 to disappear
I0523 04:09:32.850] May 23 03:14:15.937: INFO: Pod pod-subpath-test-secret-xbz2 no longer exists
I0523 04:09:32.850] STEP: Deleting pod pod-subpath-test-secret-xbz2
I0523 04:09:32.850] May 23 03:14:15.938: INFO: Deleting pod "pod-subpath-test-secret-xbz2" in namespace "subpath-7222"
... skipping 7 lines ...
I0523 04:09:32.851] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
I0523 04:09:32.851]   Atomic writer volumes
I0523 04:09:32.851]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
I0523 04:09:32.851]     should support subpaths with secret pod [LinuxOnly] [Conformance]
I0523 04:09:32.851]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.852] ------------------------------
I0523 04:09:32.852] {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":292,"completed":88,"skipped":1447,"failed":0}
I0523 04:09:32.852] SSSSSSSSSSSSSS
I0523 04:09:32.852] ------------------------------
I0523 04:09:32.852] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0523 04:09:32.852]   should honor timeout [Conformance]
I0523 04:09:32.852]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.853] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
I0523 04:09:32.856] May 23 03:14:19.743: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
I0523 04:09:32.856] [It] should honor timeout [Conformance]
I0523 04:09:32.856]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.857] STEP: Setting timeout (1s) shorter than webhook latency (5s)
I0523 04:09:32.857] STEP: Registering slow webhook via the AdmissionRegistration API
I0523 04:09:32.857] STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
I0523 04:09:32.857] STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
I0523 04:09:32.857] STEP: Registering slow webhook via the AdmissionRegistration API
I0523 04:09:32.857] STEP: Having no error when timeout is longer than webhook latency
I0523 04:09:32.857] STEP: Registering slow webhook via the AdmissionRegistration API
I0523 04:09:32.858] STEP: Having no error when timeout is empty (defaulted to 10s in v1)
I0523 04:09:32.858] STEP: Registering slow webhook via the AdmissionRegistration API
I0523 04:09:32.858] [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0523 04:09:32.858]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.858] May 23 03:14:31.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.858] STEP: Destroying namespace "webhook-4100" for this suite.
I0523 04:09:32.858] STEP: Destroying namespace "webhook-4100-markers" for this suite.
... skipping 3 lines ...
I0523 04:09:32.859] • [SLOW TEST:15.912 seconds]
I0523 04:09:32.859] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0523 04:09:32.859] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.859]   should honor timeout [Conformance]
I0523 04:09:32.859]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.859] ------------------------------
I0523 04:09:32.860] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":292,"completed":89,"skipped":1461,"failed":0}
I0523 04:09:32.860] SSSSSSSSSSS
I0523 04:09:32.860] ------------------------------
I0523 04:09:32.860] [sig-storage] EmptyDir volumes 
I0523 04:09:32.860]   should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.860]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.860] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 9 lines ...
I0523 04:09:32.862] I0523 03:14:31.982367      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.862] I0523 03:14:31.982390      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.862] [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.862]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.862] I0523 03:14:31.984633      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.863] STEP: Creating a pod to test emptydir 0666 on tmpfs
I0523 04:09:32.863] May 23 03:14:31.989: INFO: Waiting up to 5m0s for pod "pod-2f195ce8-f727-495a-b625-5e3eea41f711" in namespace "emptydir-3088" to be "Succeeded or Failed"
I0523 04:09:32.863] May 23 03:14:31.994: INFO: Pod "pod-2f195ce8-f727-495a-b625-5e3eea41f711": Phase="Pending", Reason="", readiness=false. Elapsed: 4.499299ms
I0523 04:09:32.863] May 23 03:14:33.996: INFO: Pod "pod-2f195ce8-f727-495a-b625-5e3eea41f711": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006940695s
I0523 04:09:32.863] May 23 03:14:35.999: INFO: Pod "pod-2f195ce8-f727-495a-b625-5e3eea41f711": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009999081s
I0523 04:09:32.864] STEP: Saw pod success
I0523 04:09:32.864] May 23 03:14:35.999: INFO: Pod "pod-2f195ce8-f727-495a-b625-5e3eea41f711" satisfied condition "Succeeded or Failed"
I0523 04:09:32.864] May 23 03:14:36.001: INFO: Trying to get logs from node kind-worker pod pod-2f195ce8-f727-495a-b625-5e3eea41f711 container test-container: <nil>
I0523 04:09:32.864] STEP: delete the pod
I0523 04:09:32.864] May 23 03:14:36.014: INFO: Waiting for pod pod-2f195ce8-f727-495a-b625-5e3eea41f711 to disappear
I0523 04:09:32.864] May 23 03:14:36.015: INFO: Pod pod-2f195ce8-f727-495a-b625-5e3eea41f711 no longer exists
I0523 04:09:32.864] [AfterEach] [sig-storage] EmptyDir volumes
I0523 04:09:32.865]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.865] May 23 03:14:36.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.865] STEP: Destroying namespace "emptydir-3088" for this suite.
I0523 04:09:32.865] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":90,"skipped":1472,"failed":0}
I0523 04:09:32.865] SSSSSSSSS
I0523 04:09:32.865] ------------------------------
I0523 04:09:32.865] [k8s.io] Kubelet when scheduling a busybox command in a pod 
I0523 04:09:32.865]   should print the output to logs [NodeConformance] [Conformance]
I0523 04:09:32.866]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.866] [BeforeEach] [k8s.io] Kubelet
... skipping 14 lines ...
I0523 04:09:32.868] [It] should print the output to logs [NodeConformance] [Conformance]
I0523 04:09:32.869]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.869] [AfterEach] [k8s.io] Kubelet
I0523 04:09:32.869]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.869] May 23 03:14:40.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.869] STEP: Destroying namespace "kubelet-test-3335" for this suite.
I0523 04:09:32.869] •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":292,"completed":91,"skipped":1481,"failed":0}
I0523 04:09:32.870] SSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.870] ------------------------------
I0523 04:09:32.870] [sig-storage] Downward API volume 
I0523 04:09:32.870]   should provide container's cpu limit [NodeConformance] [Conformance]
I0523 04:09:32.870]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.870] [BeforeEach] [sig-storage] Downward API volume
... skipping 11 lines ...
I0523 04:09:32.872] [BeforeEach] [sig-storage] Downward API volume
I0523 04:09:32.872]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
I0523 04:09:32.873] [It] should provide container's cpu limit [NodeConformance] [Conformance]
I0523 04:09:32.873]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.873] I0523 03:14:40.295213      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.873] STEP: Creating a pod to test downward API volume plugin
I0523 04:09:32.873] May 23 03:14:40.301: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6f1327a3-86f1-46c7-8069-99043baa2d07" in namespace "downward-api-393" to be "Succeeded or Failed"
I0523 04:09:32.874] May 23 03:14:40.303: INFO: Pod "downwardapi-volume-6f1327a3-86f1-46c7-8069-99043baa2d07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165478ms
I0523 04:09:32.874] May 23 03:14:42.306: INFO: Pod "downwardapi-volume-6f1327a3-86f1-46c7-8069-99043baa2d07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005172867s
I0523 04:09:32.874] STEP: Saw pod success
I0523 04:09:32.874] May 23 03:14:42.306: INFO: Pod "downwardapi-volume-6f1327a3-86f1-46c7-8069-99043baa2d07" satisfied condition "Succeeded or Failed"
I0523 04:09:32.874] May 23 03:14:42.309: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-6f1327a3-86f1-46c7-8069-99043baa2d07 container client-container: <nil>
I0523 04:09:32.874] STEP: delete the pod
I0523 04:09:32.874] May 23 03:14:42.319: INFO: Waiting for pod downwardapi-volume-6f1327a3-86f1-46c7-8069-99043baa2d07 to disappear
I0523 04:09:32.874] May 23 03:14:42.321: INFO: Pod downwardapi-volume-6f1327a3-86f1-46c7-8069-99043baa2d07 no longer exists
I0523 04:09:32.874] [AfterEach] [sig-storage] Downward API volume
I0523 04:09:32.875]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.875] May 23 03:14:42.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.875] STEP: Destroying namespace "downward-api-393" for this suite.
I0523 04:09:32.875] •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":292,"completed":92,"skipped":1506,"failed":0}
I0523 04:09:32.875] SSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.875] ------------------------------
I0523 04:09:32.875] [sig-storage] Projected downwardAPI 
I0523 04:09:32.875]   should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
I0523 04:09:32.875]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.876] [BeforeEach] [sig-storage] Projected downwardAPI
... skipping 11 lines ...
I0523 04:09:32.878] [BeforeEach] [sig-storage] Projected downwardAPI
I0523 04:09:32.878]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
I0523 04:09:32.878] [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
I0523 04:09:32.878]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.878] I0523 03:14:42.450209      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.879] STEP: Creating a pod to test downward API volume plugin
I0523 04:09:32.879] May 23 03:14:42.455: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6f0b648c-acae-4b1e-bf4e-42bdd4407f7c" in namespace "projected-7302" to be "Succeeded or Failed"
I0523 04:09:32.879] May 23 03:14:42.457: INFO: Pod "downwardapi-volume-6f0b648c-acae-4b1e-bf4e-42bdd4407f7c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.960652ms
I0523 04:09:32.879] May 23 03:14:44.460: INFO: Pod "downwardapi-volume-6f0b648c-acae-4b1e-bf4e-42bdd4407f7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004879714s
I0523 04:09:32.879] STEP: Saw pod success
I0523 04:09:32.880] May 23 03:14:44.460: INFO: Pod "downwardapi-volume-6f0b648c-acae-4b1e-bf4e-42bdd4407f7c" satisfied condition "Succeeded or Failed"
I0523 04:09:32.880] May 23 03:14:44.462: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-6f0b648c-acae-4b1e-bf4e-42bdd4407f7c container client-container: <nil>
I0523 04:09:32.880] STEP: delete the pod
I0523 04:09:32.880] May 23 03:14:44.471: INFO: Waiting for pod downwardapi-volume-6f0b648c-acae-4b1e-bf4e-42bdd4407f7c to disappear
I0523 04:09:32.880] May 23 03:14:44.474: INFO: Pod downwardapi-volume-6f0b648c-acae-4b1e-bf4e-42bdd4407f7c no longer exists
I0523 04:09:32.880] [AfterEach] [sig-storage] Projected downwardAPI
I0523 04:09:32.881]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.881] May 23 03:14:44.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.881] STEP: Destroying namespace "projected-7302" for this suite.
I0523 04:09:32.881] •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":93,"skipped":1529,"failed":0}
I0523 04:09:32.881] SSSSSSSSSSSS
I0523 04:09:32.881] ------------------------------
I0523 04:09:32.881] [sig-api-machinery] Namespaces [Serial] 
I0523 04:09:32.882]   should ensure that all services are removed when a namespace is deleted [Conformance]
I0523 04:09:32.882]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.882] [BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 40 lines ...
I0523 04:09:32.888] • [SLOW TEST:6.407 seconds]
I0523 04:09:32.889] [sig-api-machinery] Namespaces [Serial]
I0523 04:09:32.889] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.889]   should ensure that all services are removed when a namespace is deleted [Conformance]
I0523 04:09:32.889]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.889] ------------------------------
I0523 04:09:32.890] {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":292,"completed":94,"skipped":1541,"failed":0}
I0523 04:09:32.890] SSSSSSS
I0523 04:09:32.890] ------------------------------
I0523 04:09:32.890] [sig-apps] Job 
I0523 04:09:32.890]   should adopt matching orphans and release non-matching pods [Conformance]
I0523 04:09:32.890]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.890] [BeforeEach] [sig-apps] Job
... skipping 35 lines ...
I0523 04:09:32.896] • [SLOW TEST:9.166 seconds]
I0523 04:09:32.896] [sig-apps] Job
I0523 04:09:32.896] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0523 04:09:32.896]   should adopt matching orphans and release non-matching pods [Conformance]
I0523 04:09:32.896]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.896] ------------------------------
I0523 04:09:32.897] {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":292,"completed":95,"skipped":1548,"failed":0}
I0523 04:09:32.897] [sig-network] DNS 
I0523 04:09:32.897]   should provide DNS for ExternalName services [Conformance]
I0523 04:09:32.897]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.897] [BeforeEach] [sig-network] DNS
I0523 04:09:32.897]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0523 04:09:32.897] STEP: Creating a kubernetes client
... skipping 31 lines ...
I0523 04:09:32.901] STEP: retrieving the pod
I0523 04:09:32.901] STEP: looking for the results for each expected name from probers
I0523 04:09:32.902] May 23 03:15:08.237: INFO: File wheezy_udp@dns-test-service-3.dns-3122.svc.cluster.local from pod  dns-3122/dns-test-09163522-46cc-4574-8ed2-7098033d59fb contains 'foo.example.com.
I0523 04:09:32.902] ' instead of 'bar.example.com.'
I0523 04:09:32.902] May 23 03:15:08.239: INFO: File jessie_udp@dns-test-service-3.dns-3122.svc.cluster.local from pod  dns-3122/dns-test-09163522-46cc-4574-8ed2-7098033d59fb contains 'foo.example.com.
I0523 04:09:32.902] ' instead of 'bar.example.com.'
I0523 04:09:32.902] May 23 03:15:08.239: INFO: Lookups using dns-3122/dns-test-09163522-46cc-4574-8ed2-7098033d59fb failed for: [wheezy_udp@dns-test-service-3.dns-3122.svc.cluster.local jessie_udp@dns-test-service-3.dns-3122.svc.cluster.local]
I0523 04:09:32.902] 
I0523 04:09:32.903] May 23 03:15:13.243: INFO: File wheezy_udp@dns-test-service-3.dns-3122.svc.cluster.local from pod  dns-3122/dns-test-09163522-46cc-4574-8ed2-7098033d59fb contains 'foo.example.com.
I0523 04:09:32.903] ' instead of 'bar.example.com.'
I0523 04:09:32.903] May 23 03:15:13.249: INFO: File jessie_udp@dns-test-service-3.dns-3122.svc.cluster.local from pod  dns-3122/dns-test-09163522-46cc-4574-8ed2-7098033d59fb contains 'foo.example.com.
I0523 04:09:32.903] ' instead of 'bar.example.com.'
I0523 04:09:32.903] May 23 03:15:13.249: INFO: Lookups using dns-3122/dns-test-09163522-46cc-4574-8ed2-7098033d59fb failed for: [wheezy_udp@dns-test-service-3.dns-3122.svc.cluster.local jessie_udp@dns-test-service-3.dns-3122.svc.cluster.local]
I0523 04:09:32.903] 
I0523 04:09:32.904] May 23 03:15:18.243: INFO: File wheezy_udp@dns-test-service-3.dns-3122.svc.cluster.local from pod  dns-3122/dns-test-09163522-46cc-4574-8ed2-7098033d59fb contains 'foo.example.com.
I0523 04:09:32.904] ' instead of 'bar.example.com.'
I0523 04:09:32.904] May 23 03:15:18.245: INFO: File jessie_udp@dns-test-service-3.dns-3122.svc.cluster.local from pod  dns-3122/dns-test-09163522-46cc-4574-8ed2-7098033d59fb contains 'foo.example.com.
I0523 04:09:32.904] ' instead of 'bar.example.com.'
I0523 04:09:32.904] May 23 03:15:18.245: INFO: Lookups using dns-3122/dns-test-09163522-46cc-4574-8ed2-7098033d59fb failed for: [wheezy_udp@dns-test-service-3.dns-3122.svc.cluster.local jessie_udp@dns-test-service-3.dns-3122.svc.cluster.local]
I0523 04:09:32.904] 
I0523 04:09:32.905] May 23 03:15:23.242: INFO: File wheezy_udp@dns-test-service-3.dns-3122.svc.cluster.local from pod  dns-3122/dns-test-09163522-46cc-4574-8ed2-7098033d59fb contains 'foo.example.com.
I0523 04:09:32.905] ' instead of 'bar.example.com.'
I0523 04:09:32.905] May 23 03:15:23.245: INFO: File jessie_udp@dns-test-service-3.dns-3122.svc.cluster.local from pod  dns-3122/dns-test-09163522-46cc-4574-8ed2-7098033d59fb contains 'foo.example.com.
I0523 04:09:32.905] ' instead of 'bar.example.com.'
I0523 04:09:32.905] May 23 03:15:23.245: INFO: Lookups using dns-3122/dns-test-09163522-46cc-4574-8ed2-7098033d59fb failed for: [wheezy_udp@dns-test-service-3.dns-3122.svc.cluster.local jessie_udp@dns-test-service-3.dns-3122.svc.cluster.local]
I0523 04:09:32.905] 
I0523 04:09:32.906] May 23 03:15:28.243: INFO: File wheezy_udp@dns-test-service-3.dns-3122.svc.cluster.local from pod  dns-3122/dns-test-09163522-46cc-4574-8ed2-7098033d59fb contains 'foo.example.com.
I0523 04:09:32.906] ' instead of 'bar.example.com.'
I0523 04:09:32.906] May 23 03:15:28.246: INFO: File jessie_udp@dns-test-service-3.dns-3122.svc.cluster.local from pod  dns-3122/dns-test-09163522-46cc-4574-8ed2-7098033d59fb contains 'foo.example.com.
I0523 04:09:32.906] ' instead of 'bar.example.com.'
I0523 04:09:32.906] May 23 03:15:28.246: INFO: Lookups using dns-3122/dns-test-09163522-46cc-4574-8ed2-7098033d59fb failed for: [wheezy_udp@dns-test-service-3.dns-3122.svc.cluster.local jessie_udp@dns-test-service-3.dns-3122.svc.cluster.local]
I0523 04:09:32.906] 
I0523 04:09:32.906] May 23 03:15:33.247: INFO: DNS probes using dns-test-09163522-46cc-4574-8ed2-7098033d59fb succeeded
I0523 04:09:32.906] 
I0523 04:09:32.906] STEP: deleting the pod
I0523 04:09:32.907] STEP: changing the service to type=ClusterIP
I0523 04:09:32.907] STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3122.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3122.svc.cluster.local; sleep 1; done
... skipping 16 lines ...
I0523 04:09:32.909] • [SLOW TEST:37.300 seconds]
I0523 04:09:32.909] [sig-network] DNS
I0523 04:09:32.909] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
I0523 04:09:32.909]   should provide DNS for ExternalName services [Conformance]
I0523 04:09:32.909]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.909] ------------------------------
I0523 04:09:32.909] {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":292,"completed":96,"skipped":1548,"failed":0}
I0523 04:09:32.910] [sig-storage] EmptyDir volumes 
I0523 04:09:32.910]   should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.910]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.910] [BeforeEach] [sig-storage] EmptyDir volumes
I0523 04:09:32.910]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0523 04:09:32.910] STEP: Creating a kubernetes client
... skipping 7 lines ...
I0523 04:09:32.912] I0523 03:15:37.485343      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.912] I0523 03:15:37.485425      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.912] [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.912]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.912] I0523 03:15:37.488554      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.912] STEP: Creating a pod to test emptydir 0666 on node default medium
I0523 04:09:32.913] May 23 03:15:37.498: INFO: Waiting up to 5m0s for pod "pod-45e79f91-02b3-4d87-9248-1a7a10c37a2b" in namespace "emptydir-357" to be "Succeeded or Failed"
I0523 04:09:32.913] May 23 03:15:37.501: INFO: Pod "pod-45e79f91-02b3-4d87-9248-1a7a10c37a2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200968ms
I0523 04:09:32.913] May 23 03:15:39.503: INFO: Pod "pod-45e79f91-02b3-4d87-9248-1a7a10c37a2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005035644s
I0523 04:09:32.913] STEP: Saw pod success
I0523 04:09:32.913] May 23 03:15:39.504: INFO: Pod "pod-45e79f91-02b3-4d87-9248-1a7a10c37a2b" satisfied condition "Succeeded or Failed"
I0523 04:09:32.913] May 23 03:15:39.506: INFO: Trying to get logs from node kind-worker pod pod-45e79f91-02b3-4d87-9248-1a7a10c37a2b container test-container: <nil>
I0523 04:09:32.914] STEP: delete the pod
I0523 04:09:32.914] May 23 03:15:39.518: INFO: Waiting for pod pod-45e79f91-02b3-4d87-9248-1a7a10c37a2b to disappear
I0523 04:09:32.914] May 23 03:15:39.520: INFO: Pod pod-45e79f91-02b3-4d87-9248-1a7a10c37a2b no longer exists
I0523 04:09:32.914] [AfterEach] [sig-storage] EmptyDir volumes
I0523 04:09:32.914]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.914] May 23 03:15:39.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.914] STEP: Destroying namespace "emptydir-357" for this suite.
I0523 04:09:32.915] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":97,"skipped":1548,"failed":0}
I0523 04:09:32.915] SS
I0523 04:09:32.915] ------------------------------
I0523 04:09:32.915] [sig-storage] Projected secret 
I0523 04:09:32.915]   should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.915]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.915] [BeforeEach] [sig-storage] Projected secret
... skipping 10 lines ...
I0523 04:09:32.917] I0523 03:15:39.650906      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.917] [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.917]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.918] I0523 03:15:39.653469      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.918] STEP: Creating projection with secret that has name projected-secret-test-09989c08-57c1-478f-97fe-b5084343c7e8
I0523 04:09:32.918] STEP: Creating a pod to test consume secrets
I0523 04:09:32.918] May 23 03:15:39.660: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-694c5be0-542b-446c-869c-9a39309cccd8" in namespace "projected-3467" to be "Succeeded or Failed"
I0523 04:09:32.918] May 23 03:15:39.662: INFO: Pod "pod-projected-secrets-694c5be0-542b-446c-869c-9a39309cccd8": Phase="Pending", Reason="", readiness=false. Elapsed: 1.934902ms
I0523 04:09:32.918] May 23 03:15:41.665: INFO: Pod "pod-projected-secrets-694c5be0-542b-446c-869c-9a39309cccd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004805832s
I0523 04:09:32.918] STEP: Saw pod success
I0523 04:09:32.919] May 23 03:15:41.665: INFO: Pod "pod-projected-secrets-694c5be0-542b-446c-869c-9a39309cccd8" satisfied condition "Succeeded or Failed"
I0523 04:09:32.919] May 23 03:15:41.667: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-694c5be0-542b-446c-869c-9a39309cccd8 container projected-secret-volume-test: <nil>
I0523 04:09:32.919] STEP: delete the pod
I0523 04:09:32.919] May 23 03:15:41.679: INFO: Waiting for pod pod-projected-secrets-694c5be0-542b-446c-869c-9a39309cccd8 to disappear
I0523 04:09:32.919] May 23 03:15:41.681: INFO: Pod pod-projected-secrets-694c5be0-542b-446c-869c-9a39309cccd8 no longer exists
I0523 04:09:32.919] [AfterEach] [sig-storage] Projected secret
I0523 04:09:32.919]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.919] May 23 03:15:41.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.920] STEP: Destroying namespace "projected-3467" for this suite.
I0523 04:09:32.920] •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":98,"skipped":1550,"failed":0}
I0523 04:09:32.920] SSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.920] ------------------------------
I0523 04:09:32.920] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0523 04:09:32.920]   should mutate custom resource [Conformance]
I0523 04:09:32.920]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.920] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 36 lines ...
I0523 04:09:32.927] • [SLOW TEST:6.836 seconds]
I0523 04:09:32.927] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0523 04:09:32.927] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.927]   should mutate custom resource [Conformance]
I0523 04:09:32.927]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.927] ------------------------------
I0523 04:09:32.928] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":292,"completed":99,"skipped":1572,"failed":0}
I0523 04:09:32.928] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:32.928] ------------------------------
I0523 04:09:32.928] [sig-cli] Kubectl client Kubectl cluster-info 
I0523 04:09:32.928]   should check if Kubernetes master services is included in cluster-info  [Conformance]
I0523 04:09:32.928]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.929] [BeforeEach] [sig-cli] Kubectl client
... skipping 18 lines ...
I0523 04:09:32.932] May 23 03:15:48.761: INFO: stderr: ""
I0523 04:09:32.932] May 23 03:15:48.761: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://10.96.0.1:443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://10.96.0.1:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
I0523 04:09:32.933] [AfterEach] [sig-cli] Kubectl client
I0523 04:09:32.933]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.933] May 23 03:15:48.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.933] STEP: Destroying namespace "kubectl-4395" for this suite.
I0523 04:09:32.933] •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":292,"completed":100,"skipped":1608,"failed":0}
I0523 04:09:32.933] SSSSSSSSSSSSSSSSS
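[Note: the stdout captured above carries ANSI color escapes (\x1b[0;32m ... \x1b[0m) around the service names. A self-contained sketch of the kind of check such a test performs: strip the escapes, then look for the expected banner. Standard library only; the regexp and helper are assumptions, not the framework's code:

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    func main() {
        // Raw `kubectl cluster-info` stdout, as recorded in this log.
        stdout := "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://10.96.0.1:443\x1b[0m\n"

        // Remove ANSI SGR color sequences (ESC [ ... m), then match on plain text.
        ansi := regexp.MustCompile("\x1b\\[[0-9;]*m")
        plain := ansi.ReplaceAllString(stdout, "")
        fmt.Println(strings.Contains(plain, "Kubernetes master is running")) // true
    }
]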
I0523 04:09:32.933] ------------------------------
I0523 04:09:32.934] [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
I0523 04:09:32.934]   should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
I0523 04:09:32.934]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.934] [BeforeEach] [k8s.io] [sig-node] Pods Extended
... skipping 17 lines ...
I0523 04:09:32.937] STEP: submitting the pod to kubernetes
I0523 04:09:32.937] STEP: verifying QOS class is set on the pod
I0523 04:09:32.937] [AfterEach] [k8s.io] [sig-node] Pods Extended
I0523 04:09:32.938]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.938] May 23 03:15:48.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.938] STEP: Destroying namespace "pods-6222" for this suite.
I0523 04:09:32.938] •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":292,"completed":101,"skipped":1625,"failed":0}
I0523 04:09:32.938] SSSSSSSSSSSSSSSSSSS
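[Note: the QOS-class check above relies on the rule that a pod whose containers set requests equal to limits for both cpu and memory is classified "Guaranteed" in pod.Status.QOSClass. A minimal sketch of such a spec with the Kubernetes Go types; names, image, and quantities are illustrative assumptions:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Identical requests and limits for cpu and memory on every container
        // yield QOSClass "Guaranteed"; requests below limits would yield
        // "Burstable", and no resources at all "BestEffort".
        rl := corev1.ResourceList{
            corev1.ResourceCPU:    resource.MustParse("100m"),
            corev1.ResourceMemory: resource.MustParse("100Mi"),
        }
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "qos-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:      "main",
                    Image:     "k8s.gcr.io/pause:3.2", // illustrative image
                    Resources: corev1.ResourceRequirements{Requests: rl, Limits: rl},
                }},
            },
        }
        fmt.Println(pod.Name) // after creation, Status.QOSClass is set server-side
    }
]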
I0523 04:09:32.938] ------------------------------
I0523 04:09:32.938] [sig-storage] Projected configMap 
I0523 04:09:32.938]   updates should be reflected in volume [NodeConformance] [Conformance]
I0523 04:09:32.939]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.939] [BeforeEach] [sig-storage] Projected configMap
... skipping 23 lines ...
I0523 04:09:32.942] • [SLOW TEST:94.432 seconds]
I0523 04:09:32.943] [sig-storage] Projected configMap
I0523 04:09:32.943] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
I0523 04:09:32.943]   updates should be reflected in volume [NodeConformance] [Conformance]
I0523 04:09:32.943]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.943] ------------------------------
I0523 04:09:32.943] {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":102,"skipped":1644,"failed":0}
I0523 04:09:32.943] SSSS
I0523 04:09:32.943] ------------------------------
I0523 04:09:32.944] [sig-api-machinery] Watchers 
I0523 04:09:32.944]   should be able to restart watching from the last resource version observed by the previous watch [Conformance]
I0523 04:09:32.944]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.944] [BeforeEach] [sig-api-machinery] Watchers
... skipping 24 lines ...
I0523 04:09:32.948] May 23 03:17:23.488: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7303 /api/v1/namespaces/watch-7303/configmaps/e2e-watch-test-watch-closed 2199d793-e563-4d31-812c-b725d2a0af38 11595 0 2020-05-23 03:17:23 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-23 03:17:23 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
I0523 04:09:32.949] May 23 03:17:23.488: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7303 /api/v1/namespaces/watch-7303/configmaps/e2e-watch-test-watch-closed 2199d793-e563-4d31-812c-b725d2a0af38 11596 0 2020-05-23 03:17:23 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-23 03:17:23 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
I0523 04:09:32.949] [AfterEach] [sig-api-machinery] Watchers
I0523 04:09:32.949]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.950] May 23 03:17:23.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.950] STEP: Destroying namespace "watch-7303" for this suite.
I0523 04:09:32.950] •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":292,"completed":103,"skipped":1648,"failed":0}
I0523 04:09:32.950] SSSSSSSS
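[Note: the watch test above closes a watch and reopens it from the last ResourceVersion it observed, so the MODIFIED/DELETED events shown are replayed without a gap. A minimal client-go sketch of that restart, assuming a recent client-go where Watch takes a context; the kubeconfig path, namespace, and resource version are taken from this log, but the surrounding code is an illustrative assumption:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-142995523")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(config)

        // Hypothetical: the ResourceVersion delivered by the last event the
        // previous watch saw before it was closed.
        lastRV := "11595"
        w, err := cs.CoreV1().ConfigMaps("watch-7303").Watch(context.TODO(), metav1.ListOptions{
            ResourceVersion: lastRV, // resume from here; no intervening events are lost
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Println(ev.Type) // e.g. MODIFIED, DELETED, as in the log above
        }
    }
]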
I0523 04:09:32.950] ------------------------------
I0523 04:09:32.950] [sig-network] Networking Granular Checks: Pods 
I0523 04:09:32.950]   should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.951]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.951] [BeforeEach] [sig-network] Networking
... skipping 45 lines ...
I0523 04:09:32.957] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
I0523 04:09:32.957]   Granular Checks: Pods
I0523 04:09:32.957]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
I0523 04:09:32.957]     should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.957]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.957] ------------------------------
I0523 04:09:32.957] {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":104,"skipped":1656,"failed":0}
I0523 04:09:32.957] SSSSS
I0523 04:09:32.957] ------------------------------
I0523 04:09:32.958] [sig-api-machinery] Garbage collector 
I0523 04:09:32.958]   should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
I0523 04:09:32.958]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.958] [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 47 lines ...
I0523 04:09:32.964] • [SLOW TEST:6.161 seconds]
I0523 04:09:32.964] [sig-api-machinery] Garbage collector
I0523 04:09:32.964] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:32.964]   should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
I0523 04:09:32.964]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.964] ------------------------------
I0523 04:09:32.965] {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":292,"completed":105,"skipped":1661,"failed":0}
I0523 04:09:32.965] SSSSSSSSSSSSSSSSSS
I0523 04:09:32.965] ------------------------------
I0523 04:09:32.965] [sig-storage] Downward API volume 
I0523 04:09:32.965]   should provide container's cpu request [NodeConformance] [Conformance]
I0523 04:09:32.965]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.965] [BeforeEach] [sig-storage] Downward API volume
... skipping 11 lines ...
I0523 04:09:32.967] [BeforeEach] [sig-storage] Downward API volume
I0523 04:09:32.968]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
I0523 04:09:32.968] I0523 03:17:56.171514      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.968] [It] should provide container's cpu request [NodeConformance] [Conformance]
I0523 04:09:32.968]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.968] STEP: Creating a pod to test downward API volume plugin
I0523 04:09:32.969] May 23 03:17:56.185: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d110acc-da3a-4792-809c-3c64f4b41430" in namespace "downward-api-3522" to be "Succeeded or Failed"
I0523 04:09:32.969] May 23 03:17:56.209: INFO: Pod "downwardapi-volume-3d110acc-da3a-4792-809c-3c64f4b41430": Phase="Pending", Reason="", readiness=false. Elapsed: 24.026774ms
I0523 04:09:32.969] May 23 03:17:58.212: INFO: Pod "downwardapi-volume-3d110acc-da3a-4792-809c-3c64f4b41430": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.027174875s
I0523 04:09:32.969] STEP: Saw pod success
I0523 04:09:32.969] May 23 03:17:58.212: INFO: Pod "downwardapi-volume-3d110acc-da3a-4792-809c-3c64f4b41430" satisfied condition "Succeeded or Failed"
I0523 04:09:32.970] May 23 03:17:58.214: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-3d110acc-da3a-4792-809c-3c64f4b41430 container client-container: <nil>
I0523 04:09:32.970] STEP: delete the pod
I0523 04:09:32.970] May 23 03:17:58.235: INFO: Waiting for pod downwardapi-volume-3d110acc-da3a-4792-809c-3c64f4b41430 to disappear
I0523 04:09:32.970] May 23 03:17:58.237: INFO: Pod downwardapi-volume-3d110acc-da3a-4792-809c-3c64f4b41430 no longer exists
I0523 04:09:32.970] [AfterEach] [sig-storage] Downward API volume
I0523 04:09:32.970]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.970] May 23 03:17:58.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.970] STEP: Destroying namespace "downward-api-3522" for this suite.
I0523 04:09:32.970] •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":292,"completed":106,"skipped":1679,"failed":0}
I0523 04:09:32.970] SSSSSSSSSSSSSSS
I0523 04:09:32.971] ------------------------------
I0523 04:09:32.971] [sig-storage] Secrets 
I0523 04:09:32.971]   should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.971]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.971] [BeforeEach] [sig-storage] Secrets
... skipping 10 lines ...
I0523 04:09:32.972] I0523 03:17:58.368699      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.973] [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:32.973]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.973] I0523 03:17:58.371506      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:32.973] STEP: Creating secret with name secret-test-a2183f7b-4de2-4abb-be6a-cd7eea173278
I0523 04:09:32.973] STEP: Creating a pod to test consume secrets
I0523 04:09:32.973] May 23 03:17:58.382: INFO: Waiting up to 5m0s for pod "pod-secrets-d2e7cda3-ea40-4718-b833-8fd1aa802a2d" in namespace "secrets-3594" to be "Succeeded or Failed"
I0523 04:09:32.973] May 23 03:17:58.386: INFO: Pod "pod-secrets-d2e7cda3-ea40-4718-b833-8fd1aa802a2d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.570625ms
I0523 04:09:32.974] May 23 03:18:00.390: INFO: Pod "pod-secrets-d2e7cda3-ea40-4718-b833-8fd1aa802a2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008299264s
I0523 04:09:32.974] STEP: Saw pod success
I0523 04:09:32.974] May 23 03:18:00.390: INFO: Pod "pod-secrets-d2e7cda3-ea40-4718-b833-8fd1aa802a2d" satisfied condition "Succeeded or Failed"
I0523 04:09:32.974] May 23 03:18:00.392: INFO: Trying to get logs from node kind-worker pod pod-secrets-d2e7cda3-ea40-4718-b833-8fd1aa802a2d container secret-volume-test: <nil>
I0523 04:09:32.974] STEP: delete the pod
I0523 04:09:32.974] May 23 03:18:00.403: INFO: Waiting for pod pod-secrets-d2e7cda3-ea40-4718-b833-8fd1aa802a2d to disappear
I0523 04:09:32.974] May 23 03:18:00.406: INFO: Pod pod-secrets-d2e7cda3-ea40-4718-b833-8fd1aa802a2d no longer exists
I0523 04:09:32.975] [AfterEach] [sig-storage] Secrets
I0523 04:09:32.975]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:32.975] May 23 03:18:00.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:32.975] STEP: Destroying namespace "secrets-3594" for this suite.
I0523 04:09:32.975] •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":107,"skipped":1694,"failed":0}
I0523 04:09:32.975] 
I0523 04:09:32.975] ------------------------------
I0523 04:09:32.975] [sig-cli] Kubectl client Kubectl logs 
I0523 04:09:32.976]   should be able to retrieve and filter logs  [Conformance]
I0523 04:09:32.976]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:32.976] [BeforeEach] [sig-cli] Kubectl client
... skipping 32 lines ...
I0523 04:09:32.982] May 23 03:18:02.762: INFO: stdout: "I0523 03:18:01.856167       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/xbs 220\nI0523 03:18:02.056396       1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/lkss 429\nI0523 03:18:02.256375       1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/7xn 216\nI0523 03:18:02.456425       1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/qjq4 331\nI0523 03:18:02.656423       1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/kt6 360\n"
I0523 04:09:32.982] STEP: limiting log lines
I0523 04:09:32.982] May 23 03:18:02.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-142995523 logs logs-generator logs-generator --namespace=kubectl-223 --tail=1'
I0523 04:09:32.982] May 23 03:18:02.862: INFO: stderr: ""
I0523 04:09:32.982] May 23 03:18:02.862: INFO: stdout: "I0523 03:18:02.656423       1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/kt6 360\nI0523 03:18:02.856388       1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/smk 274\n"
I0523 04:09:32.982] May 23 03:18:02.862: INFO: got output "I0523 03:18:02.656423       1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/kt6 360\nI0523 03:18:02.856388       1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/smk 274\n"
I0523 04:09:32.983] May 23 03:18:02.862: FAIL: Expected
I0523 04:09:32.983]     <int>: 2
I0523 04:09:32.983] to equal
I0523 04:09:32.983]     <int>: 1
I0523 04:09:32.983] 
I0523 04:09:32.983] Full Stack Trace
I0523 04:09:32.983] k8s.io/kubernetes/test/e2e/kubectl.glob..func1.23.3()
... skipping 93 lines ...
I0523 04:09:33.009]         <int>: 2
I0523 04:09:33.009]     to equal
I0523 04:09:33.010]         <int>: 1
I0523 04:09:33.010] 
I0523 04:09:33.010]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1439
I0523 04:09:33.010] ------------------------------
I0523 04:09:33.010] {"msg":"FAILED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":292,"completed":107,"skipped":1694,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.010] SSSSSSSSSSSSSSSSSSSS
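[Note: the lone failure of this run. The --tail=1 check counted the newline-separated lines in the captured stdout and got 2 where it expected 1; the captured output above shows two generator lines, consistent with a fresh log line landing inside the tail window between request and capture. A self-contained sketch of that kind of line count (a hypothetical helper, not the exact code at kubectl.go:1439):

    package main

    import (
        "fmt"
        "strings"
    )

    // countLogLines mimics the shape of the e2e check on `kubectl logs --tail=N`
    // output: trim the trailing newline, split on "\n", count the lines.
    func countLogLines(stdout string) int {
        return len(strings.Split(strings.TrimRight(stdout, "\n"), "\n"))
    }

    func main() {
        // Abbreviated stdout from the failing run: two generator lines were
        // captured even though --tail=1 was passed.
        out := "I0523 ... 4 GET /api/v1/namespaces/kube-system/pods/kt6 360\n" +
            "I0523 ... 5 GET /api/v1/namespaces/default/pods/smk 274\n"
        fmt.Println(countLogLines(out)) // prints 2, so an expectation of 1 fails
    }
]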
I0523 04:09:33.010] ------------------------------
I0523 04:09:33.010] [sig-api-machinery] Garbage collector 
I0523 04:09:33.011]   should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
I0523 04:09:33.011]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.011] [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 41 lines ...
I0523 04:09:33.016] 
I0523 04:09:33.016] W0523 03:18:08.019225      17 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
I0523 04:09:33.016] [AfterEach] [sig-api-machinery] Garbage collector
I0523 04:09:33.017]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.017] May 23 03:18:08.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.017] STEP: Destroying namespace "gc-9574" for this suite.
I0523 04:09:33.017] •{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":292,"completed":108,"skipped":1714,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.017] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.017] ------------------------------
I0523 04:09:33.017] [k8s.io] Probing container 
I0523 04:09:33.017]   should have monotonically increasing restart count [NodeConformance] [Conformance]
I0523 04:09:33.018]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.018] [BeforeEach] [k8s.io] Probing container
... skipping 32 lines ...
I0523 04:09:33.022] • [SLOW TEST:148.387 seconds]
I0523 04:09:33.023] [k8s.io] Probing container
I0523 04:09:33.023] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:33.023]   should have monotonically increasing restart count [NodeConformance] [Conformance]
I0523 04:09:33.023]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.023] ------------------------------
I0523 04:09:33.023] {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":292,"completed":109,"skipped":1753,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.024] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.024] ------------------------------
I0523 04:09:33.024] [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
I0523 04:09:33.024]   should execute poststart exec hook properly [NodeConformance] [Conformance]
I0523 04:09:33.024]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.024] [BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 35 lines ...
I0523 04:09:33.030] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:33.030]   when create a pod with lifecycle hook
I0523 04:09:33.030]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
I0523 04:09:33.030]     should execute poststart exec hook properly [NodeConformance] [Conformance]
I0523 04:09:33.031]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.031] ------------------------------
I0523 04:09:33.031] {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":292,"completed":110,"skipped":1794,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.031] SSSSS
I0523 04:09:33.031] ------------------------------
I0523 04:09:33.031] [sig-api-machinery] Watchers 
I0523 04:09:33.031]   should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
I0523 04:09:33.032]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.032] [BeforeEach] [sig-api-machinery] Watchers
... skipping 36 lines ...
I0523 04:09:33.039] • [SLOW TEST:10.163 seconds]
I0523 04:09:33.040] [sig-api-machinery] Watchers
I0523 04:09:33.040] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:33.040]   should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
I0523 04:09:33.040]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.040] ------------------------------
I0523 04:09:33.040] {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":292,"completed":111,"skipped":1799,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.041] SSSSSSS
I0523 04:09:33.041] ------------------------------
I0523 04:09:33.041] [k8s.io] Security Context When creating a container with runAsUser 
I0523 04:09:33.041]   should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:33.041]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.041] [BeforeEach] [k8s.io] Security Context
... skipping 10 lines ...
I0523 04:09:33.043] I0523 03:20:56.886869      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.043] [BeforeEach] [k8s.io] Security Context
I0523 04:09:33.043]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
I0523 04:09:33.043] I0523 03:20:56.889087      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.044] [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:33.044]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.044] May 23 03:20:56.893: INFO: Waiting up to 5m0s for pod "busybox-user-65534-a1f79d5a-84ce-4d49-89b8-115e46b6b737" in namespace "security-context-test-7512" to be "Succeeded or Failed"
I0523 04:09:33.044] May 23 03:20:56.895: INFO: Pod "busybox-user-65534-a1f79d5a-84ce-4d49-89b8-115e46b6b737": Phase="Pending", Reason="", readiness=false. Elapsed: 1.765786ms
I0523 04:09:33.044] May 23 03:20:58.898: INFO: Pod "busybox-user-65534-a1f79d5a-84ce-4d49-89b8-115e46b6b737": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004767758s
I0523 04:09:33.045] May 23 03:21:00.901: INFO: Pod "busybox-user-65534-a1f79d5a-84ce-4d49-89b8-115e46b6b737": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007758938s
I0523 04:09:33.045] May 23 03:21:00.901: INFO: Pod "busybox-user-65534-a1f79d5a-84ce-4d49-89b8-115e46b6b737" satisfied condition "Succeeded or Failed"
I0523 04:09:33.045] [AfterEach] [k8s.io] Security Context
I0523 04:09:33.045]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.045] May 23 03:21:00.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.045] STEP: Destroying namespace "security-context-test-7512" for this suite.
I0523 04:09:33.046] •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":112,"skipped":1806,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.046] S
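[Note: the runAsUser case above starts a container whose process must run as UID 65534 and waits for it to succeed. A minimal sketch of the relevant SecurityContext with the Kubernetes Go types; the pod name, image, and command are illustrative assumptions:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        uid := int64(65534)
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "main",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "id -u"}, // prints 65534 in the container log
                    // RunAsUser forces the container entrypoint to run with this UID.
                    SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
                }},
            },
        }
        fmt.Println(pod.Name)
    }
]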
I0523 04:09:33.046] ------------------------------
I0523 04:09:33.046] [sig-scheduling] SchedulerPredicates [Serial] 
I0523 04:09:33.046]   validates that NodeSelector is respected if not matching  [Conformance]
I0523 04:09:33.046]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.047] [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 44 lines ...
I0523 04:09:33.054] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
I0523 04:09:33.055]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.055] May 23 03:21:02.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.055] STEP: Destroying namespace "sched-pred-7197" for this suite.
I0523 04:09:33.055] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
I0523 04:09:33.055]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
I0523 04:09:33.055] I0523 03:21:02.068653      17 request.go:821] Error in request: resource name may not be empty
I0523 04:09:33.056] •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":292,"completed":113,"skipped":1807,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.056] SSSSSS
I0523 04:09:33.056] ------------------------------
I0523 04:09:33.056] [k8s.io] Security Context When creating a pod with privileged 
I0523 04:09:33.056]   should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:33.056]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.057] [BeforeEach] [k8s.io] Security Context
... skipping 10 lines ...
I0523 04:09:33.058] I0523 03:21:02.191654      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.059] [BeforeEach] [k8s.io] Security Context
I0523 04:09:33.059]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
I0523 04:09:33.059] [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:33.059]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.059] I0523 03:21:02.194148      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.059] May 23 03:21:02.199: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-e3670b7e-9329-4598-b616-c9c7c912e23a" in namespace "security-context-test-792" to be "Succeeded or Failed"
I0523 04:09:33.060] May 23 03:21:02.201: INFO: Pod "busybox-privileged-false-e3670b7e-9329-4598-b616-c9c7c912e23a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.762182ms
I0523 04:09:33.060] May 23 03:21:04.203: INFO: Pod "busybox-privileged-false-e3670b7e-9329-4598-b616-c9c7c912e23a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004419041s
I0523 04:09:33.060] May 23 03:21:04.203: INFO: Pod "busybox-privileged-false-e3670b7e-9329-4598-b616-c9c7c912e23a" satisfied condition "Succeeded or Failed"
I0523 04:09:33.060] May 23 03:21:04.208: INFO: Got logs for pod "busybox-privileged-false-e3670b7e-9329-4598-b616-c9c7c912e23a": "ip: RTNETLINK answers: Operation not permitted\n"
I0523 04:09:33.060] [AfterEach] [k8s.io] Security Context
I0523 04:09:33.060]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.061] May 23 03:21:04.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.061] STEP: Destroying namespace "security-context-test-792" for this suite.
I0523 04:09:33.061] •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":114,"skipped":1813,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.061] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.061] ------------------------------
I0523 04:09:33.061] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0523 04:09:33.062]   should mutate custom resource with different stored version [Conformance]
I0523 04:09:33.062]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.062] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 38 lines ...
I0523 04:09:33.068] • [SLOW TEST:6.725 seconds]
I0523 04:09:33.069] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0523 04:09:33.069] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:33.069]   should mutate custom resource with different stored version [Conformance]
I0523 04:09:33.069]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.069] ------------------------------
I0523 04:09:33.070] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":292,"completed":115,"skipped":1851,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.070] SSSSSSSSSS
I0523 04:09:33.070] ------------------------------
I0523 04:09:33.070] [sig-scheduling] SchedulerPredicates [Serial] 
I0523 04:09:33.070]   validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
I0523 04:09:33.070]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.070] [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 48 lines ...
I0523 04:09:33.080]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.080] May 23 03:21:19.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.080] STEP: Destroying namespace "sched-pred-7139" for this suite.
I0523 04:09:33.080] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
I0523 04:09:33.081]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
I0523 04:09:33.081] 
I0523 04:09:33.081] I0523 03:21:19.151639      17 request.go:821] Error in request: resource name may not be empty
I0523 04:09:33.081] • [SLOW TEST:8.212 seconds]
I0523 04:09:33.081] [sig-scheduling] SchedulerPredicates [Serial]
I0523 04:09:33.081] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
I0523 04:09:33.081]   validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
I0523 04:09:33.082]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.082] ------------------------------
I0523 04:09:33.082] {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":292,"completed":116,"skipped":1861,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.082] S
I0523 04:09:33.082] ------------------------------
I0523 04:09:33.082] [sig-api-machinery] Watchers 
I0523 04:09:33.082]   should be able to start watching from a specific resource version [Conformance]
I0523 04:09:33.083]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.083] [BeforeEach] [sig-api-machinery] Watchers
... skipping 20 lines ...
I0523 04:09:33.087] May 23 03:21:19.291: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-8001 /api/v1/namespaces/watch-8001/configmaps/e2e-watch-test-resource-version afb85a69-054b-4175-b6b6-6afa5f2293ac 12939 0 2020-05-23 03:21:19 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-05-23 03:21:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
I0523 04:09:33.087] May 23 03:21:19.291: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-8001 /api/v1/namespaces/watch-8001/configmaps/e2e-watch-test-resource-version afb85a69-054b-4175-b6b6-6afa5f2293ac 12940 0 2020-05-23 03:21:19 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-05-23 03:21:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
I0523 04:09:33.087] [AfterEach] [sig-api-machinery] Watchers
I0523 04:09:33.087]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.087] May 23 03:21:19.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.088] STEP: Destroying namespace "watch-8001" for this suite.
I0523 04:09:33.088] •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":292,"completed":117,"skipped":1862,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.088] SSSSSSSSSSSSSSS
I0523 04:09:33.088] ------------------------------
I0523 04:09:33.088] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0523 04:09:33.088]   should include webhook resources in discovery documents [Conformance]
I0523 04:09:33.089]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.089] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 40 lines ...
I0523 04:09:33.096] • [SLOW TEST:5.824 seconds]
I0523 04:09:33.096] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0523 04:09:33.096] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:33.096]   should include webhook resources in discovery documents [Conformance]
I0523 04:09:33.097]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.097] ------------------------------
I0523 04:09:33.097] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":292,"completed":118,"skipped":1877,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.097] SSSS
I0523 04:09:33.097] ------------------------------
I0523 04:09:33.097] [sig-cli] Kubectl client Kubectl patch 
I0523 04:09:33.097]   should add annotations for pods in rc  [Conformance]
I0523 04:09:33.097]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.098] [BeforeEach] [sig-cli] Kubectl client
... skipping 33 lines ...
I0523 04:09:33.103] May 23 03:21:27.700: INFO: Selector matched 1 pods for map[app:agnhost]
I0523 04:09:33.103] May 23 03:21:27.700: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
I0523 04:09:33.103] [AfterEach] [sig-cli] Kubectl client
I0523 04:09:33.103]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.104] May 23 03:21:27.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.104] STEP: Destroying namespace "kubectl-9468" for this suite.
I0523 04:09:33.104] •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":292,"completed":119,"skipped":1881,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.104] SSSSSS
I0523 04:09:33.104] ------------------------------
I0523 04:09:33.104] [sig-storage] Projected configMap 
I0523 04:09:33.104]   should be consumable from pods in volume [NodeConformance] [Conformance]
I0523 04:09:33.105]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.105] [BeforeEach] [sig-storage] Projected configMap
... skipping 10 lines ...
I0523 04:09:33.107] I0523 03:21:27.832219      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.107] [It] should be consumable from pods in volume [NodeConformance] [Conformance]
I0523 04:09:33.107]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.107] I0523 03:21:27.834633      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.107] STEP: Creating configMap with name projected-configmap-test-volume-5e335b3c-d398-4df9-9618-2fb5b4f28dfe
I0523 04:09:33.107] STEP: Creating a pod to test consume configMaps
I0523 04:09:33.108] May 23 03:21:27.842: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-55b65c19-3d7e-4b9d-a4a4-965a3e95ddc4" in namespace "projected-3748" to be "Succeeded or Failed"
I0523 04:09:33.108] May 23 03:21:27.844: INFO: Pod "pod-projected-configmaps-55b65c19-3d7e-4b9d-a4a4-965a3e95ddc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.000073ms
I0523 04:09:33.108] May 23 03:21:29.847: INFO: Pod "pod-projected-configmaps-55b65c19-3d7e-4b9d-a4a4-965a3e95ddc4": Phase="Running", Reason="", readiness=true. Elapsed: 2.005003589s
I0523 04:09:33.108] May 23 03:21:31.849: INFO: Pod "pod-projected-configmaps-55b65c19-3d7e-4b9d-a4a4-965a3e95ddc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007790111s
I0523 04:09:33.108] STEP: Saw pod success
I0523 04:09:33.109] May 23 03:21:31.849: INFO: Pod "pod-projected-configmaps-55b65c19-3d7e-4b9d-a4a4-965a3e95ddc4" satisfied condition "Succeeded or Failed"
I0523 04:09:33.109] May 23 03:21:31.852: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-55b65c19-3d7e-4b9d-a4a4-965a3e95ddc4 container projected-configmap-volume-test: <nil>
I0523 04:09:33.109] STEP: delete the pod
I0523 04:09:33.109] May 23 03:21:31.864: INFO: Waiting for pod pod-projected-configmaps-55b65c19-3d7e-4b9d-a4a4-965a3e95ddc4 to disappear
I0523 04:09:33.109] May 23 03:21:31.866: INFO: Pod pod-projected-configmaps-55b65c19-3d7e-4b9d-a4a4-965a3e95ddc4 no longer exists
I0523 04:09:33.109] [AfterEach] [sig-storage] Projected configMap
I0523 04:09:33.109]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.110] May 23 03:21:31.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.110] STEP: Destroying namespace "projected-3748" for this suite.
I0523 04:09:33.110] •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":120,"skipped":1887,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.110] SSSSSSSS
I0523 04:09:33.110] ------------------------------
I0523 04:09:33.110] [sig-scheduling] SchedulerPredicates [Serial] 
I0523 04:09:33.111]   validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
I0523 04:09:33.111]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.111] [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 51 lines ...
I0523 04:09:33.119] STEP: Destroying namespace "sched-pred-8363" for this suite.
I0523 04:09:33.119] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
I0523 04:09:33.119]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
I0523 04:09:33.119] 
I0523 04:09:33.119] • [SLOW TEST:304.211 seconds]
I0523 04:09:33.119] [sig-scheduling] SchedulerPredicates [Serial]
I0523 04:09:33.120] I0523 03:26:36.082968      17 request.go:821] Error in request: resource name may not be empty
I0523 04:09:33.120] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
I0523 04:09:33.120]   validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
I0523 04:09:33.120]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.120] ------------------------------
I0523 04:09:33.121] {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":292,"completed":121,"skipped":1895,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.121] SSSSSSS
I0523 04:09:33.121] ------------------------------
I0523 04:09:33.121] [k8s.io] Variable Expansion 
I0523 04:09:33.121]   should allow substituting values in a container's args [NodeConformance] [Conformance]
I0523 04:09:33.121]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.121] [BeforeEach] [k8s.io] Variable Expansion
... skipping 9 lines ...
I0523 04:09:33.123] I0523 03:26:36.206511      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.123] I0523 03:26:36.206538      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.123] [It] should allow substituting values in a container's args [NodeConformance] [Conformance]
I0523 04:09:33.123]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.124] I0523 03:26:36.208715      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.124] STEP: Creating a pod to test substitution in container's args
I0523 04:09:33.124] May 23 03:26:36.213: INFO: Waiting up to 5m0s for pod "var-expansion-5abd6e0e-d596-4411-a73b-e11a11a698a4" in namespace "var-expansion-100" to be "Succeeded or Failed"
I0523 04:09:33.124] May 23 03:26:36.215: INFO: Pod "var-expansion-5abd6e0e-d596-4411-a73b-e11a11a698a4": Phase="Pending", Reason="", readiness=false. Elapsed: 1.88996ms
I0523 04:09:33.124] I0523 03:26:37.176521      17 reflector.go:514] k8s.io/kubernetes/test/e2e/node/taints.go:146: Watch close - *v1.Pod total 8 items received
I0523 04:09:33.125] May 23 03:26:38.218: INFO: Pod "var-expansion-5abd6e0e-d596-4411-a73b-e11a11a698a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004832839s
I0523 04:09:33.125] STEP: Saw pod success
I0523 04:09:33.125] May 23 03:26:38.218: INFO: Pod "var-expansion-5abd6e0e-d596-4411-a73b-e11a11a698a4" satisfied condition "Succeeded or Failed"
I0523 04:09:33.125] May 23 03:26:38.220: INFO: Trying to get logs from node kind-worker pod var-expansion-5abd6e0e-d596-4411-a73b-e11a11a698a4 container dapi-container: <nil>
I0523 04:09:33.125] STEP: delete the pod
I0523 04:09:33.125] May 23 03:26:38.238: INFO: Waiting for pod var-expansion-5abd6e0e-d596-4411-a73b-e11a11a698a4 to disappear
I0523 04:09:33.125] May 23 03:26:38.240: INFO: Pod var-expansion-5abd6e0e-d596-4411-a73b-e11a11a698a4 no longer exists
I0523 04:09:33.126] [AfterEach] [k8s.io] Variable Expansion
I0523 04:09:33.126]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.126] May 23 03:26:38.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.126] STEP: Destroying namespace "var-expansion-100" for this suite.
I0523 04:09:33.126] •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":292,"completed":122,"skipped":1902,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.126] SSSSSSS
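[Note: the variable-expansion case above verifies that $(VAR) references in a container's args are expanded from the container's environment by the kubelet before the process starts. A minimal sketch with the Kubernetes Go types; the env var name, value, and command are illustrative assumptions:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "dapi-container",
                    Image: "busybox",
                    Env:   []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
                    // The kubelet rewrites $(TEST_VAR) in Command/Args before the
                    // container starts, so the shell receives "echo test-value".
                    Command: []string{"sh", "-c"},
                    Args:    []string{"echo $(TEST_VAR)"},
                }},
            },
        }
        fmt.Println(pod.Name)
    }
]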
I0523 04:09:33.126] ------------------------------
I0523 04:09:33.127] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
I0523 04:09:33.127]   works for multiple CRDs of same group but different versions [Conformance]
I0523 04:09:33.127]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.127] [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 24 lines ...
I0523 04:09:33.131] • [SLOW TEST:22.916 seconds]
I0523 04:09:33.131] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
I0523 04:09:33.131] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:33.131]   works for multiple CRDs of same group but different versions [Conformance]
I0523 04:09:33.131]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.132] ------------------------------
I0523 04:09:33.132] {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":292,"completed":123,"skipped":1909,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.132] SSSS
I0523 04:09:33.132] ------------------------------
I0523 04:09:33.132] [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath 
I0523 04:09:33.133]   runs ReplicaSets to verify preemption running path [Conformance]
I0523 04:09:33.133]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.133] [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 58 lines ...
I0523 04:09:33.143] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
I0523 04:09:33.143]   PreemptionExecutionPath
I0523 04:09:33.143]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:437
I0523 04:09:33.143]     runs ReplicaSets to verify preemption running path [Conformance]
I0523 04:09:33.143]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.143] ------------------------------
I0523 04:09:33.144] {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":292,"completed":124,"skipped":1913,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.144] S
I0523 04:09:33.144] ------------------------------
I0523 04:09:33.144] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0523 04:09:33.144]   listing mutating webhooks should work [Conformance]
I0523 04:09:33.144]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.144] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 37 lines ...
I0523 04:09:33.150] • [SLOW TEST:5.911 seconds]
I0523 04:09:33.151] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0523 04:09:33.151] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:33.151]   listing mutating webhooks should work [Conformance]
I0523 04:09:33.151]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.151] ------------------------------
I0523 04:09:33.151] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":292,"completed":125,"skipped":1914,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.152] SSSSSSSSSSSSSSS
I0523 04:09:33.152] ------------------------------
I0523 04:09:33.152] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] 
I0523 04:09:33.152]   evicts pods with minTolerationSeconds [Disruptive] [Conformance]
I0523 04:09:33.152]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.152] [BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
... skipping 35 lines ...
I0523 04:09:33.158] • [SLOW TEST:93.877 seconds]
I0523 04:09:33.158] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
I0523 04:09:33.158] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:33.158]   evicts pods with minTolerationSeconds [Disruptive] [Conformance]
I0523 04:09:33.158]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.158] ------------------------------
I0523 04:09:33.159] {"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":292,"completed":126,"skipped":1929,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.159] S
I0523 04:09:33.159] ------------------------------
I0523 04:09:33.159] [sig-apps] Job 
I0523 04:09:33.159]   should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
I0523 04:09:33.159]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.160] [BeforeEach] [sig-apps] Job
I0523 04:09:33.160]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0523 04:09:33.160] STEP: Creating a kubernetes client
I0523 04:09:33.160] May 23 03:30:16.352: INFO: >>> kubeConfig: /tmp/kubeconfig-142995523
I0523 04:09:33.160] STEP: Building a namespace api object, basename job
I0523 04:09:33.160] I0523 03:30:16.356840      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.161] I0523 03:30:16.356970      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.161] STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-7309
I0523 04:09:33.161] I0523 03:30:16.381497      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.161] STEP: Waiting for a default service account to be provisioned in namespace
I0523 04:09:33.161] I0523 03:30:16.486468      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.161] I0523 03:30:16.486517      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.162] [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
I0523 04:09:33.162]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.162] I0523 03:30:16.488728      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.162] STEP: Creating a job
I0523 04:09:33.162] STEP: Ensuring job reaches completions
I0523 04:09:33.162] [AfterEach] [sig-apps] Job
I0523 04:09:33.162]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.163] May 23 03:30:24.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.163] STEP: Destroying namespace "job-7309" for this suite.
I0523 04:09:33.163] 
I0523 04:09:33.163] • [SLOW TEST:8.151 seconds]
I0523 04:09:33.163] [sig-apps] Job
I0523 04:09:33.163] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0523 04:09:33.163]   should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
I0523 04:09:33.163]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.164] ------------------------------
I0523 04:09:33.164] {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":292,"completed":127,"skipped":1930,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.164] SSSSSSSSSS
I0523 04:09:33.164] ------------------------------
I0523 04:09:33.164] [sig-api-machinery] Garbage collector 
I0523 04:09:33.164]   should orphan pods created by rc if delete options say so [Conformance]
I0523 04:09:33.164]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.164] [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 58 lines ...
I0523 04:09:33.172] • [SLOW TEST:40.313 seconds]
I0523 04:09:33.172] [sig-api-machinery] Garbage collector
I0523 04:09:33.172] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:33.172]   should orphan pods created by rc if delete options say so [Conformance]
I0523 04:09:33.173]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.173] ------------------------------
I0523 04:09:33.173] {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":292,"completed":128,"skipped":1940,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.173] SSSS
I0523 04:09:33.173] ------------------------------
I0523 04:09:33.173] [sig-cli] Kubectl client Proxy server 
I0523 04:09:33.173]   should support proxy with --port 0  [Conformance]
I0523 04:09:33.174]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.174] [BeforeEach] [sig-cli] Kubectl client
... skipping 17 lines ...
I0523 04:09:33.176] May 23 03:31:04.957: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-142995523 proxy -p 0 --disable-filter'
I0523 04:09:33.176] STEP: curling proxy /api/ output
I0523 04:09:33.177] [AfterEach] [sig-cli] Kubectl client
I0523 04:09:33.177]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.177] May 23 03:31:05.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.177] STEP: Destroying namespace "kubectl-6633" for this suite.
I0523 04:09:33.177] •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":292,"completed":129,"skipped":1944,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.177] SSSSS
I0523 04:09:33.178] ------------------------------
I0523 04:09:33.178] [sig-storage] EmptyDir volumes 
I0523 04:09:33.178]   should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:33.178]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.178] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 9 lines ...
I0523 04:09:33.180] I0523 03:31:05.208831      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.180] I0523 03:31:05.209025      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.180] I0523 03:31:05.212178      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.181] [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:33.181]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.181] STEP: Creating a pod to test emptydir 0666 on node default medium
I0523 04:09:33.181] May 23 03:31:05.220: INFO: Waiting up to 5m0s for pod "pod-9903d60f-32ce-44c0-978d-bc97a4ec3c2f" in namespace "emptydir-1790" to be "Succeeded or Failed"
I0523 04:09:33.181] May 23 03:31:05.223: INFO: Pod "pod-9903d60f-32ce-44c0-978d-bc97a4ec3c2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.564603ms
I0523 04:09:33.181] May 23 03:31:07.226: INFO: Pod "pod-9903d60f-32ce-44c0-978d-bc97a4ec3c2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005371334s
I0523 04:09:33.182] STEP: Saw pod success
I0523 04:09:33.182] May 23 03:31:07.226: INFO: Pod "pod-9903d60f-32ce-44c0-978d-bc97a4ec3c2f" satisfied condition "Succeeded or Failed"
I0523 04:09:33.182] May 23 03:31:07.228: INFO: Trying to get logs from node kind-worker pod pod-9903d60f-32ce-44c0-978d-bc97a4ec3c2f container test-container: <nil>
I0523 04:09:33.182] STEP: delete the pod
I0523 04:09:33.182] May 23 03:31:07.246: INFO: Waiting for pod pod-9903d60f-32ce-44c0-978d-bc97a4ec3c2f to disappear
I0523 04:09:33.182] May 23 03:31:07.249: INFO: Pod pod-9903d60f-32ce-44c0-978d-bc97a4ec3c2f no longer exists
I0523 04:09:33.182] [AfterEach] [sig-storage] EmptyDir volumes
I0523 04:09:33.183]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.183] May 23 03:31:07.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.183] STEP: Destroying namespace "emptydir-1790" for this suite.
I0523 04:09:33.183] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":130,"skipped":1949,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.183] SSSSS
I0523 04:09:33.183] ------------------------------
I0523 04:09:33.183] [k8s.io] [sig-node] Events 
I0523 04:09:33.184]   should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
I0523 04:09:33.184]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.184] [BeforeEach] [k8s.io] [sig-node] Events
... skipping 30 lines ...
I0523 04:09:33.192] • [SLOW TEST:6.157 seconds]
I0523 04:09:33.192] [k8s.io] [sig-node] Events
I0523 04:09:33.192] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:33.193]   should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
I0523 04:09:33.193]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.193] ------------------------------
I0523 04:09:33.193] {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":292,"completed":131,"skipped":1954,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.193] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
I0523 04:09:33.193]   works for CRD preserving unknown fields in an embedded object [Conformance]
I0523 04:09:33.193]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.194] [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
I0523 04:09:33.194]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0523 04:09:33.194] STEP: Creating a kubernetes client
... skipping 35 lines ...
I0523 04:09:33.201] • [SLOW TEST:5.845 seconds]
I0523 04:09:33.201] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
I0523 04:09:33.201] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:33.201]   works for CRD preserving unknown fields in an embedded object [Conformance]
I0523 04:09:33.201]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.201] ------------------------------
I0523 04:09:33.202] {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":292,"completed":132,"skipped":1954,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.202] SS
I0523 04:09:33.202] ------------------------------
I0523 04:09:33.202] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
I0523 04:09:33.202]   works for multiple CRDs of same group and version but different kinds [Conformance]
I0523 04:09:33.202]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.203] [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 22 lines ...
I0523 04:09:33.207] • [SLOW TEST:14.049 seconds]
I0523 04:09:33.207] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
I0523 04:09:33.207] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:33.207]   works for multiple CRDs of same group and version but different kinds [Conformance]
I0523 04:09:33.207]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.207] ------------------------------
I0523 04:09:33.208] {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":292,"completed":133,"skipped":1956,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.208] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.208] ------------------------------
I0523 04:09:33.208] [sig-network] Services 
I0523 04:09:33.208]   should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
I0523 04:09:33.208]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.208] [BeforeEach] [sig-network] Services
... skipping 97 lines ...
I0523 04:09:33.228] • [SLOW TEST:23.352 seconds]
I0523 04:09:33.229] [sig-network] Services
I0523 04:09:33.229] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
I0523 04:09:33.229]   should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
I0523 04:09:33.229]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.229] ------------------------------
I0523 04:09:33.230] {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":292,"completed":134,"skipped":1988,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.230] [sig-network] Services 
I0523 04:09:33.230]   should be able to change the type from NodePort to ExternalName [Conformance]
I0523 04:09:33.230]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.230] [BeforeEach] [sig-network] Services
I0523 04:09:33.230]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0523 04:09:33.231] STEP: Creating a kubernetes client
... skipping 44 lines ...
I0523 04:09:33.238] • [SLOW TEST:20.020 seconds]
I0523 04:09:33.238] [sig-network] Services
I0523 04:09:33.238] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
I0523 04:09:33.238]   should be able to change the type from NodePort to ExternalName [Conformance]
I0523 04:09:33.238]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.238] ------------------------------
I0523 04:09:33.239] {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":292,"completed":135,"skipped":1988,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.239] SS
I0523 04:09:33.239] ------------------------------
I0523 04:09:33.239] [sig-node] ConfigMap 
I0523 04:09:33.239]   should be consumable via environment variable [NodeConformance] [Conformance]
I0523 04:09:33.239]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.239] [BeforeEach] [sig-node] ConfigMap
... skipping 10 lines ...
I0523 04:09:33.241] I0523 03:32:16.806661      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.241] [It] should be consumable via environment variable [NodeConformance] [Conformance]
I0523 04:09:33.241]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.242] I0523 03:32:16.808795      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.242] STEP: Creating configMap configmap-1111/configmap-test-9633b55f-258c-4982-82f6-a3cb42981548
I0523 04:09:33.242] STEP: Creating a pod to test consume configMaps
I0523 04:09:33.242] May 23 03:32:16.816: INFO: Waiting up to 5m0s for pod "pod-configmaps-3f108de8-bc26-42fc-95a8-ff58e78c5e71" in namespace "configmap-1111" to be "Succeeded or Failed"
I0523 04:09:33.242] May 23 03:32:16.818: INFO: Pod "pod-configmaps-3f108de8-bc26-42fc-95a8-ff58e78c5e71": Phase="Pending", Reason="", readiness=false. Elapsed: 1.984749ms
I0523 04:09:33.242] May 23 03:32:18.821: INFO: Pod "pod-configmaps-3f108de8-bc26-42fc-95a8-ff58e78c5e71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004919016s
I0523 04:09:33.243] STEP: Saw pod success
I0523 04:09:33.243] May 23 03:32:18.821: INFO: Pod "pod-configmaps-3f108de8-bc26-42fc-95a8-ff58e78c5e71" satisfied condition "Succeeded or Failed"
I0523 04:09:33.243] May 23 03:32:18.823: INFO: Trying to get logs from node kind-worker pod pod-configmaps-3f108de8-bc26-42fc-95a8-ff58e78c5e71 container env-test: <nil>
I0523 04:09:33.243] STEP: delete the pod
I0523 04:09:33.243] May 23 03:32:18.843: INFO: Waiting for pod pod-configmaps-3f108de8-bc26-42fc-95a8-ff58e78c5e71 to disappear
I0523 04:09:33.243] May 23 03:32:18.845: INFO: Pod pod-configmaps-3f108de8-bc26-42fc-95a8-ff58e78c5e71 no longer exists
I0523 04:09:33.243] [AfterEach] [sig-node] ConfigMap
I0523 04:09:33.244]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.244] May 23 03:32:18.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.244] STEP: Destroying namespace "configmap-1111" for this suite.
I0523 04:09:33.244] •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":292,"completed":136,"skipped":1990,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.244] 
I0523 04:09:33.244] ------------------------------
I0523 04:09:33.244] [sig-storage] EmptyDir volumes 
I0523 04:09:33.244]   pod should support shared volumes between containers [Conformance]
I0523 04:09:33.245]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.245] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 19 lines ...
I0523 04:09:33.248] May 23 03:32:22.994: INFO: >>> kubeConfig: /tmp/kubeconfig-142995523
I0523 04:09:33.248] May 23 03:32:23.090: INFO: Exec stderr: ""
I0523 04:09:33.248] [AfterEach] [sig-storage] EmptyDir volumes
I0523 04:09:33.248]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.249] May 23 03:32:23.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.249] STEP: Destroying namespace "emptydir-9060" for this suite.
I0523 04:09:33.249] •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":292,"completed":137,"skipped":1990,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.249] SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.249] ------------------------------
I0523 04:09:33.249] [sig-auth] ServiceAccounts 
I0523 04:09:33.249]   should mount an API token into pods  [Conformance]
I0523 04:09:33.250]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.250] [BeforeEach] [sig-auth] ServiceAccounts
... skipping 19 lines ...
I0523 04:09:33.253] STEP: reading a file in the container
I0523 04:09:33.253] May 23 03:32:26.139: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1030 pod-service-account-fe12967e-7db7-49ee-93fc-9e92849b4d57 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
I0523 04:09:33.253] [AfterEach] [sig-auth] ServiceAccounts
I0523 04:09:33.254]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.254] May 23 03:32:26.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.254] STEP: Destroying namespace "svcaccounts-1030" for this suite.
I0523 04:09:33.254] •{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":292,"completed":138,"skipped":2019,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.254] SSSSSS
I0523 04:09:33.254] ------------------------------
I0523 04:09:33.254] [sig-network] Services 
I0523 04:09:33.254]   should serve a basic endpoint from pods  [Conformance]
I0523 04:09:33.255]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.255] [BeforeEach] [sig-network] Services
... skipping 12 lines ...
I0523 04:09:33.257]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:808
I0523 04:09:33.257] [It] should serve a basic endpoint from pods  [Conformance]
I0523 04:09:33.257]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.257] I0523 03:32:26.461983      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.258] STEP: creating service endpoint-test2 in namespace services-8673
I0523 04:09:33.258] STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8673 to expose endpoints map[]
I0523 04:09:33.258] May 23 03:32:26.472: INFO: Get endpoints failed (3.48502ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
I0523 04:09:33.258] May 23 03:32:27.475: INFO: successfully validated that service endpoint-test2 in namespace services-8673 exposes endpoints map[] (1.006610337s elapsed)
I0523 04:09:33.258] STEP: Creating pod pod1 in namespace services-8673
I0523 04:09:33.259] STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8673 to expose endpoints map[pod1:[80]]
I0523 04:09:33.259] May 23 03:32:29.496: INFO: successfully validated that service endpoint-test2 in namespace services-8673 exposes endpoints map[pod1:[80]] (2.015859377s elapsed)
I0523 04:09:33.259] STEP: Creating pod pod2 in namespace services-8673
I0523 04:09:33.259] STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8673 to expose endpoints map[pod1:[80] pod2:[80]]
... skipping 14 lines ...
I0523 04:09:33.262] • [SLOW TEST:7.232 seconds]
I0523 04:09:33.262] [sig-network] Services
I0523 04:09:33.262] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
I0523 04:09:33.262]   should serve a basic endpoint from pods  [Conformance]
I0523 04:09:33.262]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.262] ------------------------------
I0523 04:09:33.263] {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":292,"completed":139,"skipped":2025,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.263] SSSS
I0523 04:09:33.263] ------------------------------
I0523 04:09:33.263] [sig-network] Services 
I0523 04:09:33.263]   should provide secure master service  [Conformance]
I0523 04:09:33.263]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.263] [BeforeEach] [sig-network] Services
... skipping 16 lines ...
I0523 04:09:33.266] [AfterEach] [sig-network] Services
I0523 04:09:33.266]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.266] May 23 03:32:33.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.266] STEP: Destroying namespace "services-1204" for this suite.
I0523 04:09:33.266] [AfterEach] [sig-network] Services
I0523 04:09:33.266]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:812
I0523 04:09:33.267] •{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":292,"completed":140,"skipped":2029,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.267] SSSS
I0523 04:09:33.267] ------------------------------
I0523 04:09:33.267] [k8s.io] Pods 
I0523 04:09:33.267]   should get a host IP [NodeConformance] [Conformance]
I0523 04:09:33.267]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.267] [BeforeEach] [k8s.io] Pods
... skipping 16 lines ...
I0523 04:09:33.270] STEP: creating pod
I0523 04:09:33.270] May 23 03:32:35.847: INFO: Pod pod-hostip-d62e7d16-11b2-48e9-a287-19f8a4791ab0 has hostIP: 172.17.0.4
I0523 04:09:33.271] [AfterEach] [k8s.io] Pods
I0523 04:09:33.271]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.271] May 23 03:32:35.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.271] STEP: Destroying namespace "pods-8456" for this suite.
I0523 04:09:33.271] •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":292,"completed":141,"skipped":2033,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.271] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.272] ------------------------------
I0523 04:09:33.272] [k8s.io] Variable Expansion 
I0523 04:09:33.272]   should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
I0523 04:09:33.272]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.272] [BeforeEach] [k8s.io] Variable Expansion
... skipping 50 lines ...
I0523 04:09:33.279] • [SLOW TEST:165.061 seconds]
I0523 04:09:33.279] [k8s.io] Variable Expansion
I0523 04:09:33.280] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:33.280]   should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
I0523 04:09:33.280]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.280] ------------------------------
I0523 04:09:33.280] {"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":292,"completed":142,"skipped":2082,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.280] SSSSSSSSSSS
I0523 04:09:33.280] ------------------------------
I0523 04:09:33.281] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
I0523 04:09:33.281]   custom resource defaulting for requests and from storage works  [Conformance]
I0523 04:09:33.281]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.281] [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 13 lines ...
I0523 04:09:33.283] I0523 03:35:21.042109      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.284] May 23 03:35:21.042: INFO: >>> kubeConfig: /tmp/kubeconfig-142995523
I0523 04:09:33.284] [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
I0523 04:09:33.284]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.284] May 23 03:35:22.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.284] STEP: Destroying namespace "custom-resource-definition-803" for this suite.
I0523 04:09:33.285] •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":292,"completed":143,"skipped":2093,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.285] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.285] ------------------------------
I0523 04:09:33.285] [sig-storage] ConfigMap 
I0523 04:09:33.285]   should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:33.285]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.285] [BeforeEach] [sig-storage] ConfigMap
... skipping 10 lines ...
I0523 04:09:33.287] I0523 03:35:22.325725      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.287] [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:33.288]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.288] I0523 03:35:22.328284      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.288] STEP: Creating configMap with name configmap-test-volume-map-a8c69ce7-a6d3-43ed-9b1e-b1ec17b6aab1
I0523 04:09:33.288] STEP: Creating a pod to test consume configMaps
I0523 04:09:33.288] May 23 03:35:22.335: INFO: Waiting up to 5m0s for pod "pod-configmaps-5cb71133-cada-4d59-a537-6aefed70b80d" in namespace "configmap-722" to be "Succeeded or Failed"
I0523 04:09:33.289] May 23 03:35:22.338: INFO: Pod "pod-configmaps-5cb71133-cada-4d59-a537-6aefed70b80d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.099936ms
I0523 04:09:33.289] May 23 03:35:24.341: INFO: Pod "pod-configmaps-5cb71133-cada-4d59-a537-6aefed70b80d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006051027s
I0523 04:09:33.289] May 23 03:35:26.344: INFO: Pod "pod-configmaps-5cb71133-cada-4d59-a537-6aefed70b80d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008760925s
I0523 04:09:33.289] STEP: Saw pod success
I0523 04:09:33.289] May 23 03:35:26.344: INFO: Pod "pod-configmaps-5cb71133-cada-4d59-a537-6aefed70b80d" satisfied condition "Succeeded or Failed"
I0523 04:09:33.290] May 23 03:35:26.346: INFO: Trying to get logs from node kind-worker pod pod-configmaps-5cb71133-cada-4d59-a537-6aefed70b80d container configmap-volume-test: <nil>
I0523 04:09:33.290] STEP: delete the pod
I0523 04:09:33.290] May 23 03:35:26.365: INFO: Waiting for pod pod-configmaps-5cb71133-cada-4d59-a537-6aefed70b80d to disappear
I0523 04:09:33.290] May 23 03:35:26.367: INFO: Pod pod-configmaps-5cb71133-cada-4d59-a537-6aefed70b80d no longer exists
I0523 04:09:33.290] [AfterEach] [sig-storage] ConfigMap
I0523 04:09:33.290]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.290] May 23 03:35:26.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.291] STEP: Destroying namespace "configmap-722" for this suite.
I0523 04:09:33.291] •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":144,"skipped":2132,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.291] S
I0523 04:09:33.291] ------------------------------
I0523 04:09:33.291] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
I0523 04:09:33.291]   Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
I0523 04:09:33.292]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.292] [BeforeEach] [sig-apps] StatefulSet
... skipping 158 lines ...
I0523 04:09:33.325] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0523 04:09:33.325]   [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
I0523 04:09:33.326]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:33.326]     Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
I0523 04:09:33.326]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.326] ------------------------------
I0523 04:09:33.326] {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":292,"completed":145,"skipped":2133,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.326] SSSSSSS
I0523 04:09:33.326] ------------------------------
I0523 04:09:33.326] [sig-node] Downward API 
I0523 04:09:33.327]   should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
I0523 04:09:33.327]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.327] [BeforeEach] [sig-node] Downward API
... skipping 9 lines ...
I0523 04:09:33.328] I0523 03:36:28.146169      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.329] I0523 03:36:28.146197      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.329] [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
I0523 04:09:33.329]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.329] I0523 03:36:28.148416      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.330] STEP: Creating a pod to test downward api env vars
I0523 04:09:33.330] May 23 03:36:28.153: INFO: Waiting up to 5m0s for pod "downward-api-a6199662-2b43-4fc8-9ff6-89fb0f0f8159" in namespace "downward-api-9328" to be "Succeeded or Failed"
I0523 04:09:33.330] May 23 03:36:28.155: INFO: Pod "downward-api-a6199662-2b43-4fc8-9ff6-89fb0f0f8159": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003486ms
I0523 04:09:33.330] May 23 03:36:30.159: INFO: Pod "downward-api-a6199662-2b43-4fc8-9ff6-89fb0f0f8159": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00513782s
I0523 04:09:33.330] STEP: Saw pod success
I0523 04:09:33.330] May 23 03:36:30.159: INFO: Pod "downward-api-a6199662-2b43-4fc8-9ff6-89fb0f0f8159" satisfied condition "Succeeded or Failed"
I0523 04:09:33.331] May 23 03:36:30.161: INFO: Trying to get logs from node kind-worker pod downward-api-a6199662-2b43-4fc8-9ff6-89fb0f0f8159 container dapi-container: <nil>
I0523 04:09:33.331] STEP: delete the pod
I0523 04:09:33.331] May 23 03:36:30.173: INFO: Waiting for pod downward-api-a6199662-2b43-4fc8-9ff6-89fb0f0f8159 to disappear
I0523 04:09:33.331] May 23 03:36:30.175: INFO: Pod downward-api-a6199662-2b43-4fc8-9ff6-89fb0f0f8159 no longer exists
I0523 04:09:33.331] [AfterEach] [sig-node] Downward API
I0523 04:09:33.332]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.332] May 23 03:36:30.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.332] STEP: Destroying namespace "downward-api-9328" for this suite.
I0523 04:09:33.332] •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":292,"completed":146,"skipped":2140,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.332] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.333] ------------------------------
I0523 04:09:33.333] [sig-scheduling] SchedulerPredicates [Serial] 
I0523 04:09:33.333]   validates resource limits of pods that are allowed to run  [Conformance]
I0523 04:09:33.333]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.333] [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 71 lines ...
I0523 04:09:33.344] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
I0523 04:09:33.344]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.344] May 23 03:36:33.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.344] STEP: Destroying namespace "sched-pred-3249" for this suite.
I0523 04:09:33.345] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
I0523 04:09:33.345]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
I0523 04:09:33.345] •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":292,"completed":147,"skipped":2172,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.345] SI0523 03:36:33.386912      17 request.go:821] Error in request: resource name may not be empty
I0523 04:09:33.345] SSSSSSSSSSSSSSS
I0523 04:09:33.345] ------------------------------
I0523 04:09:33.345] [sig-apps] Daemon set [Serial] 
I0523 04:09:33.345]   should rollback without unnecessary restarts [Conformance]
I0523 04:09:33.346]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.346] [BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 65 lines ...
I0523 04:09:33.355] • [SLOW TEST:13.266 seconds]
I0523 04:09:33.355] [sig-apps] Daemon set [Serial]
I0523 04:09:33.355] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0523 04:09:33.355]   should rollback without unnecessary restarts [Conformance]
I0523 04:09:33.356]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.356] ------------------------------
I0523 04:09:33.356] {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":292,"completed":148,"skipped":2188,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.356] SSS
I0523 04:09:33.356] ------------------------------
I0523 04:09:33.356] [sig-network] Service endpoints latency 
I0523 04:09:33.356]   should not be very high  [Conformance]
I0523 04:09:33.357]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.357] [BeforeEach] [sig-network] Service endpoints latency
... skipping 437 lines ...
I0523 04:09:33.421] • [SLOW TEST:10.861 seconds]
I0523 04:09:33.421] [sig-network] Service endpoints latency
I0523 04:09:33.421] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
I0523 04:09:33.421]   should not be very high  [Conformance]
I0523 04:09:33.422]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.422] ------------------------------
I0523 04:09:33.422] {"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":292,"completed":149,"skipped":2191,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.422] SSSSSSSSSSSS
I0523 04:09:33.422] ------------------------------
I0523 04:09:33.422] [sig-storage] Projected downwardAPI 
I0523 04:09:33.422]   should provide container's memory request [NodeConformance] [Conformance]
I0523 04:09:33.422]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.423] [BeforeEach] [sig-storage] Projected downwardAPI
... skipping 11 lines ...
I0523 04:09:33.424] [It] should provide container's memory request [NodeConformance] [Conformance]
I0523 04:09:33.425]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.425] I0523 03:36:57.645159      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.425] I0523 03:36:57.645235      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.425] I0523 03:36:57.647750      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.425] STEP: Creating a pod to test downward API volume plugin
I0523 04:09:33.425] May 23 03:36:57.654: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9615f11-b8d5-4946-a760-0fda3eaf671f" in namespace "projected-4114" to be "Succeeded or Failed"
I0523 04:09:33.426] May 23 03:36:57.656: INFO: Pod "downwardapi-volume-c9615f11-b8d5-4946-a760-0fda3eaf671f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224824ms
I0523 04:09:33.426] May 23 03:36:59.660: INFO: Pod "downwardapi-volume-c9615f11-b8d5-4946-a760-0fda3eaf671f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005749555s
I0523 04:09:33.426] STEP: Saw pod success
I0523 04:09:33.426] May 23 03:36:59.660: INFO: Pod "downwardapi-volume-c9615f11-b8d5-4946-a760-0fda3eaf671f" satisfied condition "Succeeded or Failed"
I0523 04:09:33.426] May 23 03:36:59.662: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-c9615f11-b8d5-4946-a760-0fda3eaf671f container client-container: <nil>
I0523 04:09:33.426] STEP: delete the pod
I0523 04:09:33.426] May 23 03:36:59.674: INFO: Waiting for pod downwardapi-volume-c9615f11-b8d5-4946-a760-0fda3eaf671f to disappear
I0523 04:09:33.426] May 23 03:36:59.676: INFO: Pod downwardapi-volume-c9615f11-b8d5-4946-a760-0fda3eaf671f no longer exists
I0523 04:09:33.426] [AfterEach] [sig-storage] Projected downwardAPI
I0523 04:09:33.427]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.427] May 23 03:36:59.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.427] STEP: Destroying namespace "projected-4114" for this suite.
I0523 04:09:33.427] •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":292,"completed":150,"skipped":2203,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.427] 
I0523 04:09:33.427] ------------------------------
I0523 04:09:33.427] [sig-storage] Secrets 
I0523 04:09:33.427]   should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
I0523 04:09:33.428]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.428] [BeforeEach] [sig-storage] Secrets
... skipping 14 lines ...
I0523 04:09:33.430] I0523 03:36:59.820789      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.431] I0523 03:36:59.820811      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.431] STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secret-namespace-2705
I0523 04:09:33.431] I0523 03:36:59.837934      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.431] STEP: Creating secret with name secret-test-a5d580eb-b52b-4147-a6f3-23b3e1ea2d23
I0523 04:09:33.431] STEP: Creating a pod to test consume secrets
I0523 04:09:33.432] May 23 03:36:59.954: INFO: Waiting up to 5m0s for pod "pod-secrets-dc454e6f-2e6e-49f7-9fb0-732082a0a0d5" in namespace "secrets-6741" to be "Succeeded or Failed"
I0523 04:09:33.432] May 23 03:36:59.957: INFO: Pod "pod-secrets-dc454e6f-2e6e-49f7-9fb0-732082a0a0d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.8096ms
I0523 04:09:33.432] May 23 03:37:01.960: INFO: Pod "pod-secrets-dc454e6f-2e6e-49f7-9fb0-732082a0a0d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005549196s
I0523 04:09:33.432] STEP: Saw pod success
I0523 04:09:33.432] May 23 03:37:01.960: INFO: Pod "pod-secrets-dc454e6f-2e6e-49f7-9fb0-732082a0a0d5" satisfied condition "Succeeded or Failed"
I0523 04:09:33.433] May 23 03:37:01.962: INFO: Trying to get logs from node kind-worker pod pod-secrets-dc454e6f-2e6e-49f7-9fb0-732082a0a0d5 container secret-volume-test: <nil>
I0523 04:09:33.433] STEP: delete the pod
I0523 04:09:33.433] May 23 03:37:01.978: INFO: Waiting for pod pod-secrets-dc454e6f-2e6e-49f7-9fb0-732082a0a0d5 to disappear
I0523 04:09:33.433] May 23 03:37:01.981: INFO: Pod pod-secrets-dc454e6f-2e6e-49f7-9fb0-732082a0a0d5 no longer exists
I0523 04:09:33.433] [AfterEach] [sig-storage] Secrets
I0523 04:09:33.433]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.433] May 23 03:37:01.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.433] STEP: Destroying namespace "secrets-6741" for this suite.
I0523 04:09:33.434] STEP: Destroying namespace "secret-namespace-2705" for this suite.
I0523 04:09:33.434] •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":292,"completed":151,"skipped":2203,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.434] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.434] ------------------------------
I0523 04:09:33.434] [sig-storage] Projected secret 
I0523 04:09:33.434]   should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
I0523 04:09:33.434]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.434] [BeforeEach] [sig-storage] Projected secret
... skipping 10 lines ...
I0523 04:09:33.436] I0523 03:37:02.111416      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.436] [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
I0523 04:09:33.436]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.436] STEP: Creating projection with secret that has name projected-secret-test-map-a923936d-9f95-494c-b73d-e1c976f4bed1
I0523 04:09:33.437] I0523 03:37:02.113853      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.437] STEP: Creating a pod to test consume secrets
I0523 04:09:33.437] May 23 03:37:02.122: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a5706bf0-9fbc-4fcb-b82c-1c0068076575" in namespace "projected-6529" to be "Succeeded or Failed"
I0523 04:09:33.437] May 23 03:37:02.124: INFO: Pod "pod-projected-secrets-a5706bf0-9fbc-4fcb-b82c-1c0068076575": Phase="Pending", Reason="", readiness=false. Elapsed: 1.799136ms
I0523 04:09:33.437] May 23 03:37:04.127: INFO: Pod "pod-projected-secrets-a5706bf0-9fbc-4fcb-b82c-1c0068076575": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004345261s
I0523 04:09:33.437] STEP: Saw pod success
I0523 04:09:33.438] May 23 03:37:04.127: INFO: Pod "pod-projected-secrets-a5706bf0-9fbc-4fcb-b82c-1c0068076575" satisfied condition "Succeeded or Failed"
I0523 04:09:33.438] May 23 03:37:04.129: INFO: Trying to get logs from node kind-worker pod pod-projected-secrets-a5706bf0-9fbc-4fcb-b82c-1c0068076575 container projected-secret-volume-test: <nil>
I0523 04:09:33.438] STEP: delete the pod
I0523 04:09:33.438] May 23 03:37:04.146: INFO: Waiting for pod pod-projected-secrets-a5706bf0-9fbc-4fcb-b82c-1c0068076575 to disappear
I0523 04:09:33.438] May 23 03:37:04.150: INFO: Pod pod-projected-secrets-a5706bf0-9fbc-4fcb-b82c-1c0068076575 no longer exists
I0523 04:09:33.439] [AfterEach] [sig-storage] Projected secret
I0523 04:09:33.439]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.439] May 23 03:37:04.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.439] STEP: Destroying namespace "projected-6529" for this suite.
I0523 04:09:33.439] •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":292,"completed":152,"skipped":2244,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.440] SSSSSSSSSSSSSSS
I0523 04:09:33.440] ------------------------------
I0523 04:09:33.440] [sig-node] Downward API 
I0523 04:09:33.440]   should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
I0523 04:09:33.440]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.440] [BeforeEach] [sig-node] Downward API
... skipping 9 lines ...
I0523 04:09:33.442] I0523 03:37:04.292655      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.442] I0523 03:37:04.292758      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.442] I0523 03:37:04.295159      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.442] [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
I0523 04:09:33.442]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.443] STEP: Creating a pod to test downward api env vars
I0523 04:09:33.443] May 23 03:37:04.306: INFO: Waiting up to 5m0s for pod "downward-api-28f1b6fd-aa74-49fc-81d9-74aff3fdfdfb" in namespace "downward-api-5262" to be "Succeeded or Failed"
I0523 04:09:33.443] May 23 03:37:04.318: INFO: Pod "downward-api-28f1b6fd-aa74-49fc-81d9-74aff3fdfdfb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.108039ms
I0523 04:09:33.443] May 23 03:37:06.321: INFO: Pod "downward-api-28f1b6fd-aa74-49fc-81d9-74aff3fdfdfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014831037s
I0523 04:09:33.444] May 23 03:37:08.323: INFO: Pod "downward-api-28f1b6fd-aa74-49fc-81d9-74aff3fdfdfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017305029s
I0523 04:09:33.444] STEP: Saw pod success
I0523 04:09:33.444] May 23 03:37:08.323: INFO: Pod "downward-api-28f1b6fd-aa74-49fc-81d9-74aff3fdfdfb" satisfied condition "Succeeded or Failed"
I0523 04:09:33.444] May 23 03:37:08.325: INFO: Trying to get logs from node kind-worker pod downward-api-28f1b6fd-aa74-49fc-81d9-74aff3fdfdfb container dapi-container: <nil>
I0523 04:09:33.444] STEP: delete the pod
I0523 04:09:33.444] May 23 03:37:08.339: INFO: Waiting for pod downward-api-28f1b6fd-aa74-49fc-81d9-74aff3fdfdfb to disappear
I0523 04:09:33.445] May 23 03:37:08.341: INFO: Pod downward-api-28f1b6fd-aa74-49fc-81d9-74aff3fdfdfb no longer exists
I0523 04:09:33.445] [AfterEach] [sig-node] Downward API
I0523 04:09:33.445]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.445] May 23 03:37:08.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.445] STEP: Destroying namespace "downward-api-5262" for this suite.
I0523 04:09:33.446] •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":292,"completed":153,"skipped":2259,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.446] SSS
I0523 04:09:33.446] ------------------------------
I0523 04:09:33.446] [sig-api-machinery] Events 
I0523 04:09:33.446]   should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
I0523 04:09:33.446]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.446] [BeforeEach] [sig-api-machinery] Events
... skipping 18 lines ...
I0523 04:09:33.449] STEP: deleting the test event
I0523 04:09:33.449] STEP: listing all events in all namespaces
I0523 04:09:33.450] [AfterEach] [sig-api-machinery] Events
I0523 04:09:33.450]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.450] May 23 03:37:08.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.450] STEP: Destroying namespace "events-1519" for this suite.
I0523 04:09:33.450] •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":292,"completed":154,"skipped":2262,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.450] S
I0523 04:09:33.450] ------------------------------
I0523 04:09:33.451] [sig-cli] Kubectl client Kubectl api-versions 
I0523 04:09:33.451]   should check if v1 is in available api versions  [Conformance]
I0523 04:09:33.451]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.451] [BeforeEach] [sig-cli] Kubectl client
... skipping 18 lines ...
I0523 04:09:33.454] May 23 03:37:08.719: INFO: stderr: ""
I0523 04:09:33.455] May 23 03:37:08.719: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
I0523 04:09:33.455] [AfterEach] [sig-cli] Kubectl client
I0523 04:09:33.455]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.456] May 23 03:37:08.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.456] STEP: Destroying namespace "kubectl-977" for this suite.
I0523 04:09:33.456] •{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":292,"completed":155,"skipped":2263,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.456] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.456] ------------------------------
I0523 04:09:33.456] [k8s.io] Container Runtime blackbox test on terminated container 
I0523 04:09:33.457]   should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
I0523 04:09:33.457]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.457] [BeforeEach] [k8s.io] Container Runtime
... skipping 19 lines ...
I0523 04:09:33.460] May 23 03:37:10.865: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
I0523 04:09:33.460] STEP: delete the container
I0523 04:09:33.460] [AfterEach] [k8s.io] Container Runtime
I0523 04:09:33.460]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.460] May 23 03:37:10.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.460] STEP: Destroying namespace "container-runtime-5577" for this suite.
I0523 04:09:33.461] •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":292,"completed":156,"skipped":2323,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.461] SSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.461] ------------------------------
I0523 04:09:33.461] [sig-storage] ConfigMap 
I0523 04:09:33.461]   should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
I0523 04:09:33.461]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.461] [BeforeEach] [sig-storage] ConfigMap
... skipping 10 lines ...
I0523 04:09:33.463] I0523 03:37:11.001954      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.463] [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
I0523 04:09:33.463]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.463] STEP: Creating configMap with name configmap-test-volume-a64efc97-ea30-4a52-8de8-70f0d15acf85
I0523 04:09:33.464] I0523 03:37:11.004035      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.464] STEP: Creating a pod to test consume configMaps
I0523 04:09:33.464] May 23 03:37:11.010: INFO: Waiting up to 5m0s for pod "pod-configmaps-5bb947d8-854e-499f-b9c0-f34fb820352b" in namespace "configmap-5038" to be "Succeeded or Failed"
I0523 04:09:33.464] May 23 03:37:11.013: INFO: Pod "pod-configmaps-5bb947d8-854e-499f-b9c0-f34fb820352b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152235ms
I0523 04:09:33.464] May 23 03:37:13.016: INFO: Pod "pod-configmaps-5bb947d8-854e-499f-b9c0-f34fb820352b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005073127s
I0523 04:09:33.464] STEP: Saw pod success
I0523 04:09:33.464] May 23 03:37:13.016: INFO: Pod "pod-configmaps-5bb947d8-854e-499f-b9c0-f34fb820352b" satisfied condition "Succeeded or Failed"
I0523 04:09:33.465] May 23 03:37:13.018: INFO: Trying to get logs from node kind-worker pod pod-configmaps-5bb947d8-854e-499f-b9c0-f34fb820352b container configmap-volume-test: <nil>
I0523 04:09:33.465] STEP: delete the pod
I0523 04:09:33.465] May 23 03:37:13.032: INFO: Waiting for pod pod-configmaps-5bb947d8-854e-499f-b9c0-f34fb820352b to disappear
I0523 04:09:33.465] May 23 03:37:13.034: INFO: Pod pod-configmaps-5bb947d8-854e-499f-b9c0-f34fb820352b no longer exists
I0523 04:09:33.465] [AfterEach] [sig-storage] ConfigMap
I0523 04:09:33.465]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.466] May 23 03:37:13.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.466] STEP: Destroying namespace "configmap-5038" for this suite.
I0523 04:09:33.466] •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":292,"completed":157,"skipped":2345,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.466] SSSSSSSSSSSSSSSSSS
I0523 04:09:33.466] ------------------------------
I0523 04:09:33.466] [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
I0523 04:09:33.467]   should execute poststart http hook properly [NodeConformance] [Conformance]
I0523 04:09:33.467]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.467] [BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 33 lines ...
I0523 04:09:33.472] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:33.472]   when create a pod with lifecycle hook
I0523 04:09:33.472]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
I0523 04:09:33.473]     should execute poststart http hook properly [NodeConformance] [Conformance]
I0523 04:09:33.473]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.473] ------------------------------
I0523 04:09:33.473] {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":292,"completed":158,"skipped":2363,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.473] SSSSSSSSSSSS
I0523 04:09:33.473] ------------------------------
I0523 04:09:33.473] [sig-apps] ReplicationController 
I0523 04:09:33.473]   should release no longer matching pods [Conformance]
I0523 04:09:33.474]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.474] [BeforeEach] [sig-apps] ReplicationController
... skipping 26 lines ...
I0523 04:09:33.477] • [SLOW TEST:6.161 seconds]
I0523 04:09:33.477] [sig-apps] ReplicationController
I0523 04:09:33.478] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0523 04:09:33.478]   should release no longer matching pods [Conformance]
I0523 04:09:33.478]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.478] ------------------------------
I0523 04:09:33.478] {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":292,"completed":159,"skipped":2375,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.478] S
I0523 04:09:33.478] ------------------------------
I0523 04:09:33.479] [k8s.io] InitContainer [NodeConformance] 
I0523 04:09:33.479]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0523 04:09:33.479]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.479] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0523 04:09:33.479]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0523 04:09:33.479] STEP: Creating a kubernetes client
I0523 04:09:33.479] May 23 03:37:27.372: INFO: >>> kubeConfig: /tmp/kubeconfig-142995523
I0523 04:09:33.480] STEP: Building a namespace api object, basename init-container
... skipping 4 lines ...
I0523 04:09:33.481] STEP: Waiting for a default service account to be provisioned in namespace
I0523 04:09:33.481] I0523 03:37:27.498996      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.481] I0523 03:37:27.499023      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.481] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0523 04:09:33.481]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
I0523 04:09:33.481] I0523 03:37:27.501705      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.482] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0523 04:09:33.482]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.482] STEP: creating the pod
I0523 04:09:33.482] May 23 03:37:27.501: INFO: PodSpec: initContainers in spec.initContainers
I0523 04:09:33.482] I0523 03:37:27.508491      17 retrywatcher.go:247] Starting RetryWatcher.
I0523 04:09:33.482] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0523 04:09:33.482]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.483] May 23 03:37:30.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.483] I0523 03:37:30.697787      17 retrywatcher.go:147] Stopping RetryWatcher.
I0523 04:09:33.483] I0523 03:37:30.697881      17 retrywatcher.go:275] Stopping RetryWatcher.
I0523 04:09:33.483] STEP: Destroying namespace "init-container-2236" for this suite.
I0523 04:09:33.483] •{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":292,"completed":160,"skipped":2376,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.483] SSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.483] ------------------------------
I0523 04:09:33.483] [sig-cli] Kubectl client Kubectl version 
I0523 04:09:33.483]   should check is all data is printed  [Conformance]
I0523 04:09:33.483]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.484] [BeforeEach] [sig-cli] Kubectl client
... skipping 17 lines ...
I0523 04:09:33.486] May 23 03:37:30.921: INFO: stderr: ""
I0523 04:09:33.486] May 23 03:37:30.921: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-beta.0.135+f01d848c4808bd\", GitCommit:\"f01d848c4808bdaaa1378511c343a63a650f8cf1\", GitTreeState:\"clean\", BuildDate:\"2020-05-22T16:57:10Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-beta.0.135+f01d848c4808bd\", GitCommit:\"f01d848c4808bdaaa1378511c343a63a650f8cf1\", GitTreeState:\"clean\", BuildDate:\"2020-05-22T16:57:10Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
I0523 04:09:33.486] [AfterEach] [sig-cli] Kubectl client
I0523 04:09:33.486]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.487] May 23 03:37:30.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.487] STEP: Destroying namespace "kubectl-9215" for this suite.
I0523 04:09:33.487] •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":292,"completed":161,"skipped":2401,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.487] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.487] ------------------------------
I0523 04:09:33.487] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
I0523 04:09:33.487]   updates the published spec when one version gets renamed [Conformance]
I0523 04:09:33.488]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.488] [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 25 lines ...
I0523 04:09:33.492] • [SLOW TEST:17.681 seconds]
I0523 04:09:33.492] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
I0523 04:09:33.492] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:33.492]   updates the published spec when one version gets renamed [Conformance]
I0523 04:09:33.492]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.492] ------------------------------
I0523 04:09:33.493] {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":292,"completed":162,"skipped":2445,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.493] SSSSSSSSS
I0523 04:09:33.493] ------------------------------
I0523 04:09:33.493] [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
I0523 04:09:33.493]   should execute prestop exec hook properly [NodeConformance] [Conformance]
I0523 04:09:33.493]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.493] [BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 33 lines ...
I0523 04:09:33.498] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:33.498]   when create a pod with lifecycle hook
I0523 04:09:33.498]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
I0523 04:09:33.498]     should execute prestop exec hook properly [NodeConformance] [Conformance]
I0523 04:09:33.498]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.498] ------------------------------
I0523 04:09:33.499] {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":292,"completed":163,"skipped":2454,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.499] SSSSSSSSSSSSSSS
I0523 04:09:33.499] ------------------------------
I0523 04:09:33.499] [sig-apps] Deployment 
I0523 04:09:33.499]   deployment should support rollover [Conformance]
I0523 04:09:33.499]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.499] [BeforeEach] [sig-apps] Deployment
... skipping 58 lines ...
I0523 04:09:33.525] • [SLOW TEST:21.230 seconds]
I0523 04:09:33.525] [sig-apps] Deployment
I0523 04:09:33.525] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0523 04:09:33.525]   deployment should support rollover [Conformance]
I0523 04:09:33.526]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.526] ------------------------------
I0523 04:09:33.526] {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":292,"completed":164,"skipped":2469,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.526] SSSSSSSSSSSS
I0523 04:09:33.526] ------------------------------
I0523 04:09:33.526] [sig-storage] ConfigMap 
I0523 04:09:33.526]   updates should be reflected in volume [NodeConformance] [Conformance]
I0523 04:09:33.527]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.527] [BeforeEach] [sig-storage] ConfigMap
... skipping 16 lines ...
I0523 04:09:33.530] STEP: Updating configmap configmap-test-upd-e6a0da4f-aeef-4c96-896d-3d1a8b5959f0
I0523 04:09:33.530] STEP: waiting to observe update in volume
I0523 04:09:33.530] [AfterEach] [sig-storage] ConfigMap
I0523 04:09:33.530]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.530] May 23 03:38:26.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.530] STEP: Destroying namespace "configmap-1990" for this suite.
I0523 04:09:33.530] •{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":165,"skipped":2481,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.531] SSS
I0523 04:09:33.531] ------------------------------
I0523 04:09:33.531] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0523 04:09:33.531]   should mutate pod and apply defaults after mutation [Conformance]
I0523 04:09:33.531]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.531] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 35 lines ...
I0523 04:09:33.537] • [SLOW TEST:6.030 seconds]
I0523 04:09:33.537] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0523 04:09:33.537] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:33.537]   should mutate pod and apply defaults after mutation [Conformance]
I0523 04:09:33.538]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.538] ------------------------------
I0523 04:09:33.538] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":292,"completed":166,"skipped":2484,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.538] SSS
I0523 04:09:33.538] ------------------------------
I0523 04:09:33.538] [k8s.io] Probing container 
I0523 04:09:33.538]   should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
I0523 04:09:33.539]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.539] [BeforeEach] [k8s.io] Probing container
... skipping 27 lines ...
I0523 04:09:33.543] • [SLOW TEST:52.237 seconds]
I0523 04:09:33.543] [k8s.io] Probing container
I0523 04:09:33.543] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:33.543]   should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
I0523 04:09:33.543]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.544] ------------------------------
I0523 04:09:33.544] {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":292,"completed":167,"skipped":2487,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.544] SSSSS
I0523 04:09:33.544] ------------------------------
I0523 04:09:33.544] [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
I0523 04:09:33.544]   should be possible to delete [NodeConformance] [Conformance]
I0523 04:09:33.544]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.545] [BeforeEach] [k8s.io] Kubelet
... skipping 16 lines ...
I0523 04:09:33.547] [It] should be possible to delete [NodeConformance] [Conformance]
I0523 04:09:33.547]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.547] [AfterEach] [k8s.io] Kubelet
I0523 04:09:33.547]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.547] May 23 03:39:24.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.547] STEP: Destroying namespace "kubelet-test-80" for this suite.
I0523 04:09:33.548] •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":292,"completed":168,"skipped":2492,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.548] SSSSSSSSSSSSSS
I0523 04:09:33.548] ------------------------------
I0523 04:09:33.548] [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
I0523 04:09:33.548]   should execute prestop http hook properly [NodeConformance] [Conformance]
I0523 04:09:33.548]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.548] [BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 33 lines ...
I0523 04:09:33.553] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:33.553]   when create a pod with lifecycle hook
I0523 04:09:33.554]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
I0523 04:09:33.554]     should execute prestop http hook properly [NodeConformance] [Conformance]
I0523 04:09:33.554]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.554] ------------------------------
I0523 04:09:33.554] {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":292,"completed":169,"skipped":2506,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.554] SSSSSSSSSSSSSSSS
I0523 04:09:33.554] ------------------------------
I0523 04:09:33.555] [sig-storage] Downward API volume 
I0523 04:09:33.555]   should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
I0523 04:09:33.555]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.555] [BeforeEach] [sig-storage] Downward API volume
... skipping 11 lines ...
I0523 04:09:33.557] [BeforeEach] [sig-storage] Downward API volume
I0523 04:09:33.557]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
I0523 04:09:33.557] [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
I0523 04:09:33.558]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.558] I0523 03:39:36.905247      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.558] STEP: Creating a pod to test downward API volume plugin
I0523 04:09:33.558] May 23 03:39:36.910: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2aae8af9-bb76-432c-91b0-19c5aaa1eb1b" in namespace "downward-api-9750" to be "Succeeded or Failed"
I0523 04:09:33.558] May 23 03:39:36.912: INFO: Pod "downwardapi-volume-2aae8af9-bb76-432c-91b0-19c5aaa1eb1b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.956144ms
I0523 04:09:33.559] May 23 03:39:38.915: INFO: Pod "downwardapi-volume-2aae8af9-bb76-432c-91b0-19c5aaa1eb1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005064554s
I0523 04:09:33.559] May 23 03:39:40.918: INFO: Pod "downwardapi-volume-2aae8af9-bb76-432c-91b0-19c5aaa1eb1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008109489s
I0523 04:09:33.559] STEP: Saw pod success
I0523 04:09:33.559] May 23 03:39:40.918: INFO: Pod "downwardapi-volume-2aae8af9-bb76-432c-91b0-19c5aaa1eb1b" satisfied condition "Succeeded or Failed"
I0523 04:09:33.559] May 23 03:39:40.920: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-2aae8af9-bb76-432c-91b0-19c5aaa1eb1b container client-container: <nil>
I0523 04:09:33.559] STEP: delete the pod
I0523 04:09:33.559] May 23 03:39:40.931: INFO: Waiting for pod downwardapi-volume-2aae8af9-bb76-432c-91b0-19c5aaa1eb1b to disappear
I0523 04:09:33.560] May 23 03:39:40.933: INFO: Pod downwardapi-volume-2aae8af9-bb76-432c-91b0-19c5aaa1eb1b no longer exists
I0523 04:09:33.560] [AfterEach] [sig-storage] Downward API volume
I0523 04:09:33.560]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.560] May 23 03:39:40.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.560] STEP: Destroying namespace "downward-api-9750" for this suite.
I0523 04:09:33.560] •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":170,"skipped":2522,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.560] S
I0523 04:09:33.561] ------------------------------
I0523 04:09:33.561] [sig-network] Networking Granular Checks: Pods 
I0523 04:09:33.561]   should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:33.561]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.561] [BeforeEach] [sig-network] Networking
... skipping 43 lines ...
I0523 04:09:33.568] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
I0523 04:09:33.568]   Granular Checks: Pods
I0523 04:09:33.568]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
I0523 04:09:33.568]     should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:33.568]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.568] ------------------------------
I0523 04:09:33.569] {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":171,"skipped":2523,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.569] SSS
I0523 04:09:33.569] ------------------------------
I0523 04:09:33.569] [k8s.io] Kubelet when scheduling a read only busybox container 
I0523 04:09:33.569]   should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:33.569]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.569] [BeforeEach] [k8s.io] Kubelet
... skipping 14 lines ...
I0523 04:09:33.572] [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:33.572]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.572] [AfterEach] [k8s.io] Kubelet
I0523 04:09:33.573]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.573] May 23 03:40:03.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.573] STEP: Destroying namespace "kubelet-test-4239" for this suite.
I0523 04:09:33.573] •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":172,"skipped":2526,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.573] SSSSSSSSSSSSSSSSS
I0523 04:09:33.573] ------------------------------
I0523 04:09:33.574] [sig-api-machinery] Secrets 
I0523 04:09:33.574]   should be consumable via the environment [NodeConformance] [Conformance]
I0523 04:09:33.574]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.574] [BeforeEach] [sig-api-machinery] Secrets
... skipping 10 lines ...
I0523 04:09:33.576] I0523 03:40:03.618332      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.576] [It] should be consumable via the environment [NodeConformance] [Conformance]
I0523 04:09:33.576]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.576] I0523 03:40:03.620602      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.576] STEP: creating secret secrets-4396/secret-test-99b49965-a558-4162-a66d-dacf1a3fd128
I0523 04:09:33.576] STEP: Creating a pod to test consume secrets
I0523 04:09:33.577] May 23 03:40:03.628: INFO: Waiting up to 5m0s for pod "pod-configmaps-403f4fad-1aad-4ee8-ad7b-d384839ce1ef" in namespace "secrets-4396" to be "Succeeded or Failed"
I0523 04:09:33.577] May 23 03:40:03.630: INFO: Pod "pod-configmaps-403f4fad-1aad-4ee8-ad7b-d384839ce1ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074217ms
I0523 04:09:33.577] May 23 03:40:05.633: INFO: Pod "pod-configmaps-403f4fad-1aad-4ee8-ad7b-d384839ce1ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004976319s
I0523 04:09:33.577] STEP: Saw pod success
I0523 04:09:33.577] May 23 03:40:05.633: INFO: Pod "pod-configmaps-403f4fad-1aad-4ee8-ad7b-d384839ce1ef" satisfied condition "Succeeded or Failed"
I0523 04:09:33.578] May 23 03:40:05.635: INFO: Trying to get logs from node kind-worker pod pod-configmaps-403f4fad-1aad-4ee8-ad7b-d384839ce1ef container env-test: <nil>
I0523 04:09:33.578] STEP: delete the pod
I0523 04:09:33.578] May 23 03:40:05.647: INFO: Waiting for pod pod-configmaps-403f4fad-1aad-4ee8-ad7b-d384839ce1ef to disappear
I0523 04:09:33.578] May 23 03:40:05.649: INFO: Pod pod-configmaps-403f4fad-1aad-4ee8-ad7b-d384839ce1ef no longer exists
I0523 04:09:33.578] [AfterEach] [sig-api-machinery] Secrets
I0523 04:09:33.578]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.579] May 23 03:40:05.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.579] STEP: Destroying namespace "secrets-4396" for this suite.
I0523 04:09:33.579] •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":292,"completed":173,"skipped":2543,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.579] SSSSSSSSSSSSSS
I0523 04:09:33.579] ------------------------------
I0523 04:09:33.579] [k8s.io] Variable Expansion 
I0523 04:09:33.579]   should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
I0523 04:09:33.580]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.580] [BeforeEach] [k8s.io] Variable Expansion
I0523 04:09:33.580]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0523 04:09:33.580] STEP: Creating a kubernetes client
I0523 04:09:33.580] May 23 03:40:05.654: INFO: >>> kubeConfig: /tmp/kubeconfig-142995523
I0523 04:09:33.580] STEP: Building a namespace api object, basename var-expansion
I0523 04:09:33.581] I0523 03:40:05.658466      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.581] I0523 03:40:05.658493      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.581] STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-8822
I0523 04:09:33.581] I0523 03:40:05.671956      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.581] STEP: Waiting for a default service account to be provisioned in namespace
I0523 04:09:33.582] I0523 03:40:05.776833      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.582] I0523 03:40:05.776855      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.582] [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
I0523 04:09:33.582]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.582] I0523 03:40:05.779651      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.583] I0523 03:41:06.637452      17 reflector.go:514] k8s.io/kubernetes/test/e2e/node/taints.go:146: Watch close - *v1.Pod total 6 items received
I0523 04:09:33.583] May 23 03:42:05.792: INFO: Deleting pod "var-expansion-949eec4e-a693-4bc4-aa13-84900bca0848" in namespace "var-expansion-8822"
I0523 04:09:33.583] May 23 03:42:05.795: INFO: Wait up to 5m0s for pod "var-expansion-949eec4e-a693-4bc4-aa13-84900bca0848" to be fully deleted
I0523 04:09:33.583] [AfterEach] [k8s.io] Variable Expansion
I0523 04:09:33.583]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.584] May 23 03:42:07.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.584] STEP: Destroying namespace "var-expansion-8822" for this suite.
I0523 04:09:33.584] 
I0523 04:09:33.584] • [SLOW TEST:122.154 seconds]
I0523 04:09:33.584] [k8s.io] Variable Expansion
I0523 04:09:33.584] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:33.584]   should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
I0523 04:09:33.585]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.585] ------------------------------
I0523 04:09:33.585] {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":292,"completed":174,"skipped":2557,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.585] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.585] ------------------------------
I0523 04:09:33.585] [sig-api-machinery] Servers with support for Table transformation 
I0523 04:09:33.585]   should return a 406 for a backend which does not implement metadata [Conformance]
I0523 04:09:33.585]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.586] [BeforeEach] [sig-api-machinery] Servers with support for Table transformation
... skipping 14 lines ...
I0523 04:09:33.588] [It] should return a 406 for a backend which does not implement metadata [Conformance]
I0523 04:09:33.588]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.588] [AfterEach] [sig-api-machinery] Servers with support for Table transformation
I0523 04:09:33.589]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.589] May 23 03:42:07.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.589] STEP: Destroying namespace "tables-1722" for this suite.
I0523 04:09:33.589] •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":292,"completed":175,"skipped":2592,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.589] SSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.589] ------------------------------
I0523 04:09:33.590] [sig-api-machinery] ResourceQuota 
I0523 04:09:33.590]   should be able to update and delete ResourceQuota. [Conformance]
I0523 04:09:33.590]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.590] [BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 18 lines ...
I0523 04:09:33.593] STEP: Deleting a ResourceQuota
I0523 04:09:33.593] STEP: Verifying the deleted ResourceQuota
I0523 04:09:33.593] [AfterEach] [sig-api-machinery] ResourceQuota
I0523 04:09:33.594]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.594] May 23 03:42:08.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.594] STEP: Destroying namespace "resourcequota-5464" for this suite.
I0523 04:09:33.594] •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":292,"completed":176,"skipped":2615,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.594] SSSSSSSSSSSSSSSS
I0523 04:09:33.594] ------------------------------
I0523 04:09:33.595] [sig-apps] Deployment 
I0523 04:09:33.595]   RollingUpdateDeployment should delete old pods and create new ones [Conformance]
I0523 04:09:33.595]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.595] [BeforeEach] [sig-apps] Deployment
... skipping 41 lines ...
I0523 04:09:33.613] • [SLOW TEST:7.166 seconds]
I0523 04:09:33.614] [sig-apps] Deployment
I0523 04:09:33.614] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0523 04:09:33.614]   RollingUpdateDeployment should delete old pods and create new ones [Conformance]
I0523 04:09:33.614]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.614] ------------------------------
I0523 04:09:33.614] {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":292,"completed":177,"skipped":2631,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.615] SSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.615] ------------------------------
I0523 04:09:33.615] [sig-storage] Projected downwardAPI 
I0523 04:09:33.615]   should provide container's cpu request [NodeConformance] [Conformance]
I0523 04:09:33.615]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.615] [BeforeEach] [sig-storage] Projected downwardAPI
... skipping 11 lines ...
I0523 04:09:33.617] [BeforeEach] [sig-storage] Projected downwardAPI
I0523 04:09:33.617]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
I0523 04:09:33.617] I0523 03:42:15.389168      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.618] [It] should provide container's cpu request [NodeConformance] [Conformance]
I0523 04:09:33.618]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.618] STEP: Creating a pod to test downward API volume plugin
I0523 04:09:33.618] May 23 03:42:15.395: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc6eaa72-b1da-4f5a-bf41-c89ca606b027" in namespace "projected-1709" to be "Succeeded or Failed"
I0523 04:09:33.618] May 23 03:42:15.397: INFO: Pod "downwardapi-volume-dc6eaa72-b1da-4f5a-bf41-c89ca606b027": Phase="Pending", Reason="", readiness=false. Elapsed: 2.33324ms
I0523 04:09:33.619] May 23 03:42:17.400: INFO: Pod "downwardapi-volume-dc6eaa72-b1da-4f5a-bf41-c89ca606b027": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005521169s
I0523 04:09:33.619] STEP: Saw pod success
I0523 04:09:33.619] May 23 03:42:17.400: INFO: Pod "downwardapi-volume-dc6eaa72-b1da-4f5a-bf41-c89ca606b027" satisfied condition "Succeeded or Failed"
I0523 04:09:33.619] May 23 03:42:17.403: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-dc6eaa72-b1da-4f5a-bf41-c89ca606b027 container client-container: <nil>
I0523 04:09:33.619] STEP: delete the pod
I0523 04:09:33.620] May 23 03:42:17.421: INFO: Waiting for pod downwardapi-volume-dc6eaa72-b1da-4f5a-bf41-c89ca606b027 to disappear
I0523 04:09:33.620] May 23 03:42:17.422: INFO: Pod downwardapi-volume-dc6eaa72-b1da-4f5a-bf41-c89ca606b027 no longer exists
I0523 04:09:33.620] [AfterEach] [sig-storage] Projected downwardAPI
I0523 04:09:33.620]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.620] May 23 03:42:17.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.620] STEP: Destroying namespace "projected-1709" for this suite.
I0523 04:09:33.621] •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":292,"completed":178,"skipped":2651,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.621] SSSSSS
I0523 04:09:33.621] ------------------------------
I0523 04:09:33.621] [sig-apps] ReplicationController 
I0523 04:09:33.621]   should serve a basic image on each replica with a public image  [Conformance]
I0523 04:09:33.621]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.621] [BeforeEach] [sig-apps] ReplicationController
... skipping 28 lines ...
I0523 04:09:33.627] • [SLOW TEST:10.151 seconds]
I0523 04:09:33.627] [sig-apps] ReplicationController
I0523 04:09:33.627] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0523 04:09:33.627]   should serve a basic image on each replica with a public image  [Conformance]
I0523 04:09:33.628]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.628] ------------------------------
I0523 04:09:33.628] {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":292,"completed":179,"skipped":2657,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.628] SSSSSSSSSSSSSS
I0523 04:09:33.628] ------------------------------
I0523 04:09:33.628] [k8s.io] Docker Containers 
I0523 04:09:33.629]   should be able to override the image's default command and arguments [NodeConformance] [Conformance]
I0523 04:09:33.629]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.629] [BeforeEach] [k8s.io] Docker Containers
... skipping 9 lines ...
I0523 04:09:33.631] I0523 03:42:27.706430      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.631] I0523 03:42:27.706454      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.631] [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
I0523 04:09:33.631]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.632] STEP: Creating a pod to test override all
I0523 04:09:33.632] I0523 03:42:27.708855      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.632] May 23 03:42:27.713: INFO: Waiting up to 5m0s for pod "client-containers-db403d79-3dc0-4d0a-b33a-775065a24b0c" in namespace "containers-667" to be "Succeeded or Failed"
I0523 04:09:33.632] May 23 03:42:27.715: INFO: Pod "client-containers-db403d79-3dc0-4d0a-b33a-775065a24b0c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.749236ms
I0523 04:09:33.632] May 23 03:42:29.719: INFO: Pod "client-containers-db403d79-3dc0-4d0a-b33a-775065a24b0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005301571s
I0523 04:09:33.632] STEP: Saw pod success
I0523 04:09:33.632] May 23 03:42:29.719: INFO: Pod "client-containers-db403d79-3dc0-4d0a-b33a-775065a24b0c" satisfied condition "Succeeded or Failed"
I0523 04:09:33.633] May 23 03:42:29.721: INFO: Trying to get logs from node kind-worker pod client-containers-db403d79-3dc0-4d0a-b33a-775065a24b0c container test-container: <nil>
I0523 04:09:33.633] STEP: delete the pod
I0523 04:09:33.633] May 23 03:42:29.733: INFO: Waiting for pod client-containers-db403d79-3dc0-4d0a-b33a-775065a24b0c to disappear
I0523 04:09:33.633] May 23 03:42:29.735: INFO: Pod client-containers-db403d79-3dc0-4d0a-b33a-775065a24b0c no longer exists
I0523 04:09:33.633] [AfterEach] [k8s.io] Docker Containers
I0523 04:09:33.633]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.634] May 23 03:42:29.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.634] STEP: Destroying namespace "containers-667" for this suite.
I0523 04:09:33.634] •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":292,"completed":180,"skipped":2671,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.634] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.634] ------------------------------
I0523 04:09:33.634] [k8s.io] Docker Containers 
I0523 04:09:33.635]   should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
I0523 04:09:33.635]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.635] [BeforeEach] [k8s.io] Docker Containers
... skipping 9 lines ...
I0523 04:09:33.637] I0523 03:42:29.863046      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.637] I0523 03:42:29.863065      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.637] [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
I0523 04:09:33.637]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.637] I0523 03:42:29.865286      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.638] STEP: Creating a pod to test override command
I0523 04:09:33.638] May 23 03:42:29.870: INFO: Waiting up to 5m0s for pod "client-containers-c184629d-58e0-4618-8850-b802b6285801" in namespace "containers-5972" to be "Succeeded or Failed"
I0523 04:09:33.638] May 23 03:42:29.872: INFO: Pod "client-containers-c184629d-58e0-4618-8850-b802b6285801": Phase="Pending", Reason="", readiness=false. Elapsed: 1.977287ms
I0523 04:09:33.638] May 23 03:42:31.875: INFO: Pod "client-containers-c184629d-58e0-4618-8850-b802b6285801": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005138483s
I0523 04:09:33.638] STEP: Saw pod success
I0523 04:09:33.639] May 23 03:42:31.875: INFO: Pod "client-containers-c184629d-58e0-4618-8850-b802b6285801" satisfied condition "Succeeded or Failed"
I0523 04:09:33.639] May 23 03:42:31.877: INFO: Trying to get logs from node kind-worker pod client-containers-c184629d-58e0-4618-8850-b802b6285801 container test-container: <nil>
I0523 04:09:33.639] STEP: delete the pod
I0523 04:09:33.639] May 23 03:42:31.887: INFO: Waiting for pod client-containers-c184629d-58e0-4618-8850-b802b6285801 to disappear
I0523 04:09:33.639] May 23 03:42:31.889: INFO: Pod client-containers-c184629d-58e0-4618-8850-b802b6285801 no longer exists
I0523 04:09:33.640] [AfterEach] [k8s.io] Docker Containers
I0523 04:09:33.640]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.640] May 23 03:42:31.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.640] STEP: Destroying namespace "containers-5972" for this suite.
I0523 04:09:33.641] •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":292,"completed":181,"skipped":2711,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.641] 
I0523 04:09:33.641] ------------------------------
I0523 04:09:33.641] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
I0523 04:09:33.641]   Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
I0523 04:09:33.641]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.642] [BeforeEach] [sig-apps] StatefulSet
... skipping 122 lines ...
I0523 04:09:33.662] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0523 04:09:33.663]   [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
I0523 04:09:33.663]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:33.663]     Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
I0523 04:09:33.663]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.663] ------------------------------
I0523 04:09:33.664] {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":292,"completed":182,"skipped":2711,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.664] SSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.664] ------------------------------
I0523 04:09:33.664] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0523 04:09:33.664]   should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
I0523 04:09:33.664]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.664] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 40 lines ...
I0523 04:09:33.670] • [SLOW TEST:5.777 seconds]
I0523 04:09:33.670] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0523 04:09:33.670] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:33.671]   should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
I0523 04:09:33.671]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.671] ------------------------------
I0523 04:09:33.671] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":292,"completed":183,"skipped":2733,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.671] SSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.672] ------------------------------
I0523 04:09:33.672] [k8s.io] Pods 
I0523 04:09:33.672]   should be submitted and removed [NodeConformance] [Conformance]
I0523 04:09:33.672]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.672] [BeforeEach] [k8s.io] Pods
... skipping 32 lines ...
I0523 04:09:33.677] • [SLOW TEST:16.624 seconds]
I0523 04:09:33.677] [k8s.io] Pods
I0523 04:09:33.677] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:33.678]   should be submitted and removed [NodeConformance] [Conformance]
I0523 04:09:33.678]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.678] ------------------------------
I0523 04:09:33.678] {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":292,"completed":184,"skipped":2760,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.679] SSSSSSSSSSSS
I0523 04:09:33.679] ------------------------------
I0523 04:09:33.679] [sig-network] DNS 
I0523 04:09:33.679]   should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
I0523 04:09:33.679]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.679] [BeforeEach] [sig-network] DNS
... skipping 33 lines ...
I0523 04:09:33.692] May 23 03:44:28.538: INFO: Unable to read jessie_udp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.692] May 23 03:44:28.540: INFO: Unable to read jessie_tcp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.692] May 23 03:44:28.542: INFO: Unable to read jessie_udp@dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.693] May 23 03:44:28.544: INFO: Unable to read jessie_tcp@dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.693] May 23 03:44:28.546: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.693] May 23 03:44:28.548: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.694] May 23 03:44:28.562: INFO: Lookups using dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6409 wheezy_tcp@dns-test-service.dns-6409 wheezy_udp@dns-test-service.dns-6409.svc wheezy_tcp@dns-test-service.dns-6409.svc wheezy_udp@_http._tcp.dns-test-service.dns-6409.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6409.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6409 jessie_tcp@dns-test-service.dns-6409 jessie_udp@dns-test-service.dns-6409.svc jessie_tcp@dns-test-service.dns-6409.svc jessie_udp@_http._tcp.dns-test-service.dns-6409.svc jessie_tcp@_http._tcp.dns-test-service.dns-6409.svc]
I0523 04:09:33.694] 
I0523 04:09:33.694] May 23 03:44:33.566: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.695] May 23 03:44:33.568: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.695] May 23 03:44:33.571: INFO: Unable to read wheezy_udp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.695] May 23 03:44:33.573: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.696] May 23 03:44:33.575: INFO: Unable to read wheezy_udp@dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
... skipping 5 lines ...
I0523 04:09:33.698] May 23 03:44:33.600: INFO: Unable to read jessie_udp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.698] May 23 03:44:33.602: INFO: Unable to read jessie_tcp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.698] May 23 03:44:33.604: INFO: Unable to read jessie_udp@dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.699] May 23 03:44:33.606: INFO: Unable to read jessie_tcp@dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.699] May 23 03:44:33.608: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.699] May 23 03:44:33.610: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.700] May 23 03:44:33.622: INFO: Lookups using dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6409 wheezy_tcp@dns-test-service.dns-6409 wheezy_udp@dns-test-service.dns-6409.svc wheezy_tcp@dns-test-service.dns-6409.svc wheezy_udp@_http._tcp.dns-test-service.dns-6409.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6409.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6409 jessie_tcp@dns-test-service.dns-6409 jessie_udp@dns-test-service.dns-6409.svc jessie_tcp@dns-test-service.dns-6409.svc jessie_udp@_http._tcp.dns-test-service.dns-6409.svc jessie_tcp@_http._tcp.dns-test-service.dns-6409.svc]
I0523 04:09:33.700] 
I0523 04:09:33.700] May 23 03:44:38.566: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.701] May 23 03:44:38.569: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.701] May 23 03:44:38.571: INFO: Unable to read wheezy_udp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.701] May 23 03:44:38.574: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.701] May 23 03:44:38.576: INFO: Unable to read wheezy_udp@dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
... skipping 5 lines ...
I0523 04:09:33.703] May 23 03:44:38.603: INFO: Unable to read jessie_udp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.703] May 23 03:44:38.606: INFO: Unable to read jessie_tcp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.703] May 23 03:44:38.609: INFO: Unable to read jessie_udp@dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.704] May 23 03:44:38.611: INFO: Unable to read jessie_tcp@dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.704] May 23 03:44:38.613: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.704] May 23 03:44:38.615: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.705] May 23 03:44:38.628: INFO: Lookups using dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6409 wheezy_tcp@dns-test-service.dns-6409 wheezy_udp@dns-test-service.dns-6409.svc wheezy_tcp@dns-test-service.dns-6409.svc wheezy_udp@_http._tcp.dns-test-service.dns-6409.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6409.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6409 jessie_tcp@dns-test-service.dns-6409 jessie_udp@dns-test-service.dns-6409.svc jessie_tcp@dns-test-service.dns-6409.svc jessie_udp@_http._tcp.dns-test-service.dns-6409.svc jessie_tcp@_http._tcp.dns-test-service.dns-6409.svc]
I0523 04:09:33.705] 
I0523 04:09:33.705] May 23 03:44:43.566: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.706] May 23 03:44:43.569: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.706] May 23 03:44:43.571: INFO: Unable to read wheezy_udp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.706] May 23 03:44:43.573: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.707] May 23 03:44:43.575: INFO: Unable to read wheezy_udp@dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
... skipping 5 lines ...
I0523 04:09:33.709] May 23 03:44:43.600: INFO: Unable to read jessie_udp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.709] May 23 03:44:43.602: INFO: Unable to read jessie_tcp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.709] May 23 03:44:43.604: INFO: Unable to read jessie_udp@dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.710] May 23 03:44:43.606: INFO: Unable to read jessie_tcp@dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.710] May 23 03:44:43.608: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.710] May 23 03:44:43.610: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.711] May 23 03:44:43.622: INFO: Lookups using dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6409 wheezy_tcp@dns-test-service.dns-6409 wheezy_udp@dns-test-service.dns-6409.svc wheezy_tcp@dns-test-service.dns-6409.svc wheezy_udp@_http._tcp.dns-test-service.dns-6409.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6409.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6409 jessie_tcp@dns-test-service.dns-6409 jessie_udp@dns-test-service.dns-6409.svc jessie_tcp@dns-test-service.dns-6409.svc jessie_udp@_http._tcp.dns-test-service.dns-6409.svc jessie_tcp@_http._tcp.dns-test-service.dns-6409.svc]
I0523 04:09:33.711] 
I0523 04:09:33.711] May 23 03:44:48.566: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.711] May 23 03:44:48.569: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.712] May 23 03:44:48.572: INFO: Unable to read wheezy_udp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.712] May 23 03:44:48.575: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.712] May 23 03:44:48.577: INFO: Unable to read wheezy_udp@dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
... skipping 5 lines ...
I0523 04:09:33.714] May 23 03:44:48.604: INFO: Unable to read jessie_udp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.714] May 23 03:44:48.607: INFO: Unable to read jessie_tcp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.715] May 23 03:44:48.609: INFO: Unable to read jessie_udp@dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.715] May 23 03:44:48.611: INFO: Unable to read jessie_tcp@dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.715] May 23 03:44:48.613: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.716] May 23 03:44:48.615: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.716] May 23 03:44:48.628: INFO: Lookups using dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6409 wheezy_tcp@dns-test-service.dns-6409 wheezy_udp@dns-test-service.dns-6409.svc wheezy_tcp@dns-test-service.dns-6409.svc wheezy_udp@_http._tcp.dns-test-service.dns-6409.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6409.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6409 jessie_tcp@dns-test-service.dns-6409 jessie_udp@dns-test-service.dns-6409.svc jessie_tcp@dns-test-service.dns-6409.svc jessie_udp@_http._tcp.dns-test-service.dns-6409.svc jessie_tcp@_http._tcp.dns-test-service.dns-6409.svc]
I0523 04:09:33.716] 
I0523 04:09:33.717] May 23 03:44:53.565: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.717] May 23 03:44:53.568: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.717] May 23 03:44:53.571: INFO: Unable to read wheezy_udp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.717] May 23 03:44:53.573: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.718] May 23 03:44:53.575: INFO: Unable to read wheezy_udp@dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
... skipping 5 lines ...
I0523 04:09:33.720] May 23 03:44:53.602: INFO: Unable to read jessie_udp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.720] May 23 03:44:53.604: INFO: Unable to read jessie_tcp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.720] May 23 03:44:53.607: INFO: Unable to read jessie_udp@dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.720] May 23 03:44:53.609: INFO: Unable to read jessie_tcp@dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.721] May 23 03:44:53.611: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.721] May 23 03:44:53.614: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.722] May 23 03:44:53.626: INFO: Lookups using dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6409 wheezy_tcp@dns-test-service.dns-6409 wheezy_udp@dns-test-service.dns-6409.svc wheezy_tcp@dns-test-service.dns-6409.svc wheezy_udp@_http._tcp.dns-test-service.dns-6409.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6409.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6409 jessie_tcp@dns-test-service.dns-6409 jessie_udp@dns-test-service.dns-6409.svc jessie_tcp@dns-test-service.dns-6409.svc jessie_udp@_http._tcp.dns-test-service.dns-6409.svc jessie_tcp@_http._tcp.dns-test-service.dns-6409.svc]
I0523 04:09:33.722] 
I0523 04:09:33.723] May 23 03:44:58.600: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.723] May 23 03:44:58.602: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.723] May 23 03:44:58.604: INFO: Unable to read jessie_udp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.723] May 23 03:44:58.606: INFO: Unable to read jessie_tcp@dns-test-service.dns-6409 from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.724] May 23 03:44:58.613: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.724] May 23 03:44:58.615: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6409.svc from pod dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7: the server could not find the requested resource (get pods dns-test-4af9bb59-1965-4467-9424-355e3a406da7)
I0523 04:09:33.724] May 23 03:44:58.628: INFO: Lookups using dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7 failed for: [jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6409 jessie_tcp@dns-test-service.dns-6409 jessie_udp@_http._tcp.dns-test-service.dns-6409.svc jessie_tcp@_http._tcp.dns-test-service.dns-6409.svc]
I0523 04:09:33.724] 
I0523 04:09:33.725] May 23 03:45:03.622: INFO: DNS probes using dns-6409/dns-test-4af9bb59-1965-4467-9424-355e3a406da7 succeeded
I0523 04:09:33.725] 
I0523 04:09:33.725] STEP: deleting the pod
I0523 04:09:33.725] STEP: deleting the test service
I0523 04:09:33.725] STEP: deleting the test headless service
... skipping 5 lines ...
I0523 04:09:33.726] • [SLOW TEST:37.343 seconds]
I0523 04:09:33.726] [sig-network] DNS
I0523 04:09:33.726] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
I0523 04:09:33.726]   should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
I0523 04:09:33.726]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.726] ------------------------------
I0523 04:09:33.727] {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":292,"completed":185,"skipped":2772,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.727] SSSSSSSSSSSSSSSSSSS
I0523 04:09:33.727] ------------------------------
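The failed lookups in the block above cycle through the standard Kubernetes service DNS name forms until the records propagate (the wheezy_/jessie_ prefixes in the log identify the two resolver test images). A minimal Go sketch of how such probe names are assembled, using the service and namespace from the log; the code is illustrative, not the e2e implementation:

package main

import "fmt"

func main() {
	svc, ns := "dns-test-service", "dns-6409" // as in the log above
	names := []string{
		svc,                               // partial name, resolvable in-namespace
		fmt.Sprintf("%s.%s", svc, ns),     // service.namespace
		fmt.Sprintf("%s.%s.svc", svc, ns), // service.namespace.svc
		fmt.Sprintf("_http._tcp.%s.%s.svc", svc, ns), // SRV record name
	}
	for _, n := range names {
		fmt.Println("udp@"+n, "tcp@"+n) // each name is probed over UDP and TCP
	}
}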
I0523 04:09:33.727] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0523 04:09:33.727]   patching/updating a mutating webhook should work [Conformance]
I0523 04:09:33.727]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.727] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 38 lines ...
I0523 04:09:33.734] • [SLOW TEST:5.707 seconds]
I0523 04:09:33.734] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0523 04:09:33.734] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:33.734]   patching/updating a mutating webhook should work [Conformance]
I0523 04:09:33.735]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.735] ------------------------------
I0523 04:09:33.735] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":292,"completed":186,"skipped":2791,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.735] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.735] ------------------------------
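A minimal client-go sketch of the get-modify-update cycle a test like the one above performs on a mutating webhook configuration. The configuration name and kubeconfig path are illustrative, and the sketch assumes at least one webhook entry; this is not the e2e code itself:

package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // path is an assumption
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.Background()

	cfg, err := client.AdmissionregistrationV1().MutatingWebhookConfigurations().
		Get(ctx, "sample-webhook-cfg", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ignore := admissionregistrationv1.Ignore
	cfg.Webhooks[0].FailurePolicy = &ignore // flip one field, then write it back
	if _, err := client.AdmissionregistrationV1().MutatingWebhookConfigurations().
		Update(ctx, cfg, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}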
I0523 04:09:33.735] [sig-cli] Kubectl client Kubectl expose 
I0523 04:09:33.735]   should create services for rc  [Conformance]
I0523 04:09:33.735]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.736] [BeforeEach] [sig-cli] Kubectl client
... skipping 50 lines ...
I0523 04:09:33.743] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
I0523 04:09:33.743]   Kubectl expose
I0523 04:09:33.743]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1229
I0523 04:09:33.744]     should create services for rc  [Conformance]
I0523 04:09:33.744]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.744] ------------------------------
I0523 04:09:33.744] {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":292,"completed":187,"skipped":2831,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
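A minimal sketch of the kind of kubectl invocation this expose test exercises, turning an existing replication controller into a service (the resource name and ports are illustrative, not taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Expose an existing replication controller "frontend" as a new
	// ClusterIP service named "frontend-svc" on port 80.
	out, err := exec.Command("kubectl", "expose", "rc", "frontend",
		"--name=frontend-svc", "--port=80", "--target-port=8080").CombinedOutput()
	fmt.Println(string(out), err)
}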
I0523 04:09:33.744] [sig-auth] ServiceAccounts 
I0523 04:09:33.744]   should run through the lifecycle of a ServiceAccount [Conformance]
I0523 04:09:33.745]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.745] [BeforeEach] [sig-auth] ServiceAccounts
I0523 04:09:33.745]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0523 04:09:33.745] STEP: Creating a kubernetes client
... skipping 15 lines ...
I0523 04:09:33.748] STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
I0523 04:09:33.748] STEP: deleting the ServiceAccount
I0523 04:09:33.748] [AfterEach] [sig-auth] ServiceAccounts
I0523 04:09:33.748]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.749] May 23 03:45:16.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.749] STEP: Destroying namespace "svcaccounts-3466" for this suite.
I0523 04:09:33.749] •{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":292,"completed":188,"skipped":2831,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.749] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.749] ------------------------------
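A minimal client-go sketch of the lifecycle steps listed above, create, find by label selector, delete (the names, label, and kubeconfig path are illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // path is an assumption
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx, ns := context.Background(), "default"

	// Create a labelled ServiceAccount, find it by label selector, delete it.
	sa := &corev1.ServiceAccount{ObjectMeta: metav1.ObjectMeta{
		Name: "e2e-sa", Labels: map[string]string{"purpose": "lifecycle-demo"},
	}}
	if _, err := client.CoreV1().ServiceAccounts(ns).Create(ctx, sa, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	list, _ := client.CoreV1().ServiceAccounts(ns).List(ctx,
		metav1.ListOptions{LabelSelector: "purpose=lifecycle-demo"})
	fmt.Println("found", len(list.Items), "matching ServiceAccounts")
	_ = client.CoreV1().ServiceAccounts(ns).Delete(ctx, "e2e-sa", metav1.DeleteOptions{})
}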
I0523 04:09:33.750] [sig-network] DNS 
I0523 04:09:33.750]   should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
I0523 04:09:33.750]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.750] [BeforeEach] [sig-network] DNS
... skipping 23 lines ...
I0523 04:09:33.756] 
I0523 04:09:33.756] STEP: deleting the pod
I0523 04:09:33.756] [AfterEach] [sig-network] DNS
I0523 04:09:33.757]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.757] May 23 03:45:20.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.757] STEP: Destroying namespace "dns-8496" for this suite.
I0523 04:09:33.757] •{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":292,"completed":189,"skipped":2878,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.757] SSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.758] ------------------------------
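A minimal sketch of the property checked above: inside the pod, /etc/hosts should be the kubelet-managed file, recognizable by its header comment (the exact header string is an assumption of this sketch, not quoted from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Kubelet-managed hosts files begin with a recognizable comment line.
	fmt.Println(strings.Contains(string(data), "# Kubernetes-managed hosts file"))
}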
I0523 04:09:33.758] [sig-storage] EmptyDir wrapper volumes 
I0523 04:09:33.758]   should not conflict [Conformance]
I0523 04:09:33.758]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.758] [BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 15 lines ...
I0523 04:09:33.761] STEP: Cleaning up the configmap
I0523 04:09:33.761] STEP: Cleaning up the pod
I0523 04:09:33.761] [AfterEach] [sig-storage] EmptyDir wrapper volumes
I0523 04:09:33.761]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.762] May 23 03:45:22.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.762] STEP: Destroying namespace "emptydir-wrapper-5249" for this suite.
I0523 04:09:33.762] •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":292,"completed":190,"skipped":2900,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.762] SSSSSS
I0523 04:09:33.762] ------------------------------
I0523 04:09:33.762] [sig-auth] ServiceAccounts 
I0523 04:09:33.762]   should allow opting out of API token automount  [Conformance]
I0523 04:09:33.762]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.763] [BeforeEach] [sig-auth] ServiceAccounts
... skipping 31 lines ...
I0523 04:09:33.767] May 23 03:45:23.406: INFO: created pod pod-service-account-nomountsa-nomountspec
I0523 04:09:33.768] May 23 03:45:23.406: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
I0523 04:09:33.768] [AfterEach] [sig-auth] ServiceAccounts
I0523 04:09:33.768]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.768] May 23 03:45:23.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.768] STEP: Destroying namespace "svcaccounts-6372" for this suite.
I0523 04:09:33.769] •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":292,"completed":191,"skipped":2906,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.769] SSSSSSSSSSS
I0523 04:09:33.769] ------------------------------
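A minimal sketch of the field this test toggles: automountServiceAccountToken on the pod spec, which overrides the ServiceAccount-level setting (pod and account names are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optOut := false
	pod := corev1.Pod{Spec: corev1.PodSpec{
		ServiceAccountName:           "default",
		AutomountServiceAccountToken: &optOut, // no token volume mount for this pod
	}}
	fmt.Println("token volume mounted:", *pod.Spec.AutomountServiceAccountToken)
}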
I0523 04:09:33.769] [sig-cli] Kubectl client Update Demo 
I0523 04:09:33.769]   should scale a replication controller  [Conformance]
I0523 04:09:33.769]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.769] [BeforeEach] [sig-cli] Kubectl client
... skipping 150 lines ...
I0523 04:09:33.795] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
I0523 04:09:33.795]   Update Demo
I0523 04:09:33.795]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301
I0523 04:09:33.795]     should scale a replication controller  [Conformance]
I0523 04:09:33.796]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.796] ------------------------------
I0523 04:09:33.796] {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":292,"completed":192,"skipped":2917,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.796] SSSS
I0523 04:09:33.796] ------------------------------
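A minimal sketch of the scaling commands a run like this issues (the controller name follows the Update Demo fixtures but is an assumption here):

package main

import "os/exec"

func main() {
	// Scale the replication controller down to one replica, then back up,
	// verifying the pod count after each step as the test does.
	_ = exec.Command("kubectl", "scale", "rc", "update-demo-nautilus", "--replicas=1").Run()
	_ = exec.Command("kubectl", "scale", "rc", "update-demo-nautilus", "--replicas=2").Run()
}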
I0523 04:09:33.796] [sig-storage] Projected secret 
I0523 04:09:33.796]   should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:33.797]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.797] [BeforeEach] [sig-storage] Projected secret
... skipping 10 lines ...
I0523 04:09:33.799] I0523 03:45:44.101829      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.799] [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:33.799]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.799] I0523 03:45:44.103919      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.800] STEP: Creating projection with secret that has name projected-secret-test-8e54bb34-3153-4f87-a83d-1159da7f0429
I0523 04:09:33.800] STEP: Creating a pod to test consume secrets
I0523 04:09:33.800] May 23 03:45:44.111: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fce5cd66-5d8e-42de-9151-342b789e5190" in namespace "projected-6290" to be "Succeeded or Failed"
I0523 04:09:33.800] May 23 03:45:44.113: INFO: Pod "pod-projected-secrets-fce5cd66-5d8e-42de-9151-342b789e5190": Phase="Pending", Reason="", readiness=false. Elapsed: 1.925466ms
I0523 04:09:33.800] May 23 03:45:46.116: INFO: Pod "pod-projected-secrets-fce5cd66-5d8e-42de-9151-342b789e5190": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005009291s
I0523 04:09:33.801] STEP: Saw pod success
I0523 04:09:33.801] May 23 03:45:46.116: INFO: Pod "pod-projected-secrets-fce5cd66-5d8e-42de-9151-342b789e5190" satisfied condition "Succeeded or Failed"
I0523 04:09:33.801] May 23 03:45:46.120: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-fce5cd66-5d8e-42de-9151-342b789e5190 container projected-secret-volume-test: <nil>
I0523 04:09:33.801] STEP: delete the pod
I0523 04:09:33.801] May 23 03:45:46.144: INFO: Waiting for pod pod-projected-secrets-fce5cd66-5d8e-42de-9151-342b789e5190 to disappear
I0523 04:09:33.801] May 23 03:45:46.150: INFO: Pod pod-projected-secrets-fce5cd66-5d8e-42de-9151-342b789e5190 no longer exists
I0523 04:09:33.801] [AfterEach] [sig-storage] Projected secret
I0523 04:09:33.801]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.802] May 23 03:45:46.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.802] STEP: Destroying namespace "projected-6290" for this suite.
I0523 04:09:33.802] •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":193,"skipped":2921,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.802] SSSSSSSS
I0523 04:09:33.802] ------------------------------
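A minimal sketch of the volume shape under test above: a projected secret source with an explicit defaultMode applied to the mounted files (the secret, volume name, and mode value are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // read-only for the owner
	vol := corev1.Volume{
		Name: "projected-secret",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode, // applied to every projected file
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
					},
				}},
			},
		},
	}
	fmt.Println(*vol.VolumeSource.Projected.DefaultMode)
}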
I0523 04:09:33.802] [sig-storage] Downward API volume 
I0523 04:09:33.802]   should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:33.803]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.803] [BeforeEach] [sig-storage] Downward API volume
... skipping 11 lines ...
I0523 04:09:33.805] [BeforeEach] [sig-storage] Downward API volume
I0523 04:09:33.805]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
I0523 04:09:33.805] [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:33.805]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.806] I0523 03:45:46.284573      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.806] STEP: Creating a pod to test downward API volume plugin
I0523 04:09:33.806] May 23 03:45:46.289: INFO: Waiting up to 5m0s for pod "downwardapi-volume-308a90bd-f03f-403d-8647-38b1cc95d61d" in namespace "downward-api-9707" to be "Succeeded or Failed"
I0523 04:09:33.806] May 23 03:45:46.291: INFO: Pod "downwardapi-volume-308a90bd-f03f-403d-8647-38b1cc95d61d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044607ms
I0523 04:09:33.806] May 23 03:45:48.295: INFO: Pod "downwardapi-volume-308a90bd-f03f-403d-8647-38b1cc95d61d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005117629s
I0523 04:09:33.807] STEP: Saw pod success
I0523 04:09:33.807] May 23 03:45:48.295: INFO: Pod "downwardapi-volume-308a90bd-f03f-403d-8647-38b1cc95d61d" satisfied condition "Succeeded or Failed"
I0523 04:09:33.807] May 23 03:45:48.297: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-308a90bd-f03f-403d-8647-38b1cc95d61d container client-container: <nil>
I0523 04:09:33.807] STEP: delete the pod
I0523 04:09:33.807] May 23 03:45:48.309: INFO: Waiting for pod downwardapi-volume-308a90bd-f03f-403d-8647-38b1cc95d61d to disappear
I0523 04:09:33.807] May 23 03:45:48.311: INFO: Pod downwardapi-volume-308a90bd-f03f-403d-8647-38b1cc95d61d no longer exists
I0523 04:09:33.808] [AfterEach] [sig-storage] Downward API volume
I0523 04:09:33.808]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.808] May 23 03:45:48.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.808] STEP: Destroying namespace "downward-api-9707" for this suite.
I0523 04:09:33.808] •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":194,"skipped":2929,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.809] SSSSSSS
I0523 04:09:33.809] ------------------------------
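A minimal sketch of the per-item mode this test checks, which overrides the volume-level defaultMode for a single downward API file (the path and field are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400)
	src := corev1.DownwardAPIVolumeSource{
		Items: []corev1.DownwardAPIVolumeFile{{
			Path: "podname",
			FieldRef: &corev1.ObjectFieldSelector{
				APIVersion: "v1",
				FieldPath:  "metadata.name",
			},
			Mode: &mode, // overrides the volume's defaultMode for this file only
		}},
	}
	fmt.Println(*src.Items[0].Mode)
}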
I0523 04:09:33.809] [sig-storage] Downward API volume 
I0523 04:09:33.809]   should update labels on modification [NodeConformance] [Conformance]
I0523 04:09:33.809]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.809] [BeforeEach] [sig-storage] Downward API volume
... skipping 16 lines ...
I0523 04:09:33.812] STEP: Creating the pod
I0523 04:09:33.812] May 23 03:45:50.976: INFO: Successfully updated pod "labelsupdate5936d8f6-bcc1-4351-a2b8-526854b21afd"
I0523 04:09:33.813] [AfterEach] [sig-storage] Downward API volume
I0523 04:09:33.813]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.813] May 23 03:45:52.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.813] STEP: Destroying namespace "downward-api-3198" for this suite.
I0523 04:09:33.813] •{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":292,"completed":195,"skipped":2936,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.813] SSSSSSSSSSS
I0523 04:09:33.814] ------------------------------
I0523 04:09:33.814] [sig-network] Services 
I0523 04:09:33.814]   should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
I0523 04:09:33.814]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.814] [BeforeEach] [sig-network] Services
... skipping 96 lines ...
I0523 04:09:33.836] • [SLOW TEST:53.404 seconds]
I0523 04:09:33.836] [sig-network] Services
I0523 04:09:33.836] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
I0523 04:09:33.836]   should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
I0523 04:09:33.836]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.837] ------------------------------
I0523 04:09:33.837] {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":292,"completed":196,"skipped":2947,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.837] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.837] ------------------------------
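A minimal sketch of the spec fields involved in the test above: ClientIP session affinity with a custom timeout on a NodePort service (the port and timeout values are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	timeout := int32(10) // seconds; a short timeout makes expiry observable
	spec := corev1.ServiceSpec{
		Type:            corev1.ServiceTypeNodePort,
		SessionAffinity: corev1.ServiceAffinityClientIP,
		SessionAffinityConfig: &corev1.SessionAffinityConfig{
			ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
		},
		Ports: []corev1.ServicePort{{Port: 80}},
	}
	fmt.Println(spec.SessionAffinity, *spec.SessionAffinityConfig.ClientIP.TimeoutSeconds)
}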
I0523 04:09:33.837] [sig-api-machinery] Watchers 
I0523 04:09:33.838]   should receive events on concurrent watches in same order [Conformance]
I0523 04:09:33.838]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.838] [BeforeEach] [sig-api-machinery] Watchers
... skipping 21 lines ...
I0523 04:09:33.843] • [SLOW TEST:5.556 seconds]
I0523 04:09:33.843] [sig-api-machinery] Watchers
I0523 04:09:33.843] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:33.843]   should receive events on concurrent watches in same order [Conformance]
I0523 04:09:33.844]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.844] ------------------------------
I0523 04:09:33.844] {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":292,"completed":197,"skipped":3005,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.844] SS
I0523 04:09:33.845] ------------------------------
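A minimal client-go sketch of the property under test: two watches opened against the same resource should deliver events in the same order (the namespace, resource type, and kubeconfig path are illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // path is an assumption
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.Background()

	w1, _ := client.CoreV1().ConfigMaps("default").Watch(ctx, metav1.ListOptions{})
	w2, _ := client.CoreV1().ConfigMaps("default").Watch(ctx, metav1.ListOptions{})
	defer w1.Stop()
	defer w2.Stop()

	// Drain one event from each watcher; at the same stream position both
	// should report the same event type for the same object.
	e1, e2 := <-w1.ResultChan(), <-w2.ResultChan()
	fmt.Println(e1.Type, e2.Type)
}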
I0523 04:09:33.845] [sig-storage] Secrets 
I0523 04:09:33.845]   should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:33.845]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.845] [BeforeEach] [sig-storage] Secrets
... skipping 10 lines ...
I0523 04:09:33.848] I0523 03:46:52.080511      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.848] [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:33.848]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.848] I0523 03:46:52.082806      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.849] STEP: Creating secret with name secret-test-map-dded53b3-41ac-48b1-9db2-c0ead74af98e
I0523 04:09:33.849] STEP: Creating a pod to test consume secrets
I0523 04:09:33.849] May 23 03:46:52.090: INFO: Waiting up to 5m0s for pod "pod-secrets-5448abd7-8a6e-475b-8ab1-fec1ccef1c61" in namespace "secrets-9159" to be "Succeeded or Failed"
I0523 04:09:33.849] May 23 03:46:52.092: INFO: Pod "pod-secrets-5448abd7-8a6e-475b-8ab1-fec1ccef1c61": Phase="Pending", Reason="", readiness=false. Elapsed: 1.872251ms
I0523 04:09:33.850] May 23 03:46:54.095: INFO: Pod "pod-secrets-5448abd7-8a6e-475b-8ab1-fec1ccef1c61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004869152s
I0523 04:09:33.850] STEP: Saw pod success
I0523 04:09:33.850] May 23 03:46:54.095: INFO: Pod "pod-secrets-5448abd7-8a6e-475b-8ab1-fec1ccef1c61" satisfied condition "Succeeded or Failed"
I0523 04:09:33.850] May 23 03:46:54.097: INFO: Trying to get logs from node kind-worker pod pod-secrets-5448abd7-8a6e-475b-8ab1-fec1ccef1c61 container secret-volume-test: <nil>
I0523 04:09:33.851] STEP: delete the pod
I0523 04:09:33.851] May 23 03:46:54.115: INFO: Waiting for pod pod-secrets-5448abd7-8a6e-475b-8ab1-fec1ccef1c61 to disappear
I0523 04:09:33.851] May 23 03:46:54.117: INFO: Pod pod-secrets-5448abd7-8a6e-475b-8ab1-fec1ccef1c61 no longer exists
I0523 04:09:33.851] [AfterEach] [sig-storage] Secrets
I0523 04:09:33.851]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.852] May 23 03:46:54.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.852] STEP: Destroying namespace "secrets-9159" for this suite.
I0523 04:09:33.852] •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":198,"skipped":3007,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.852] SSSS
I0523 04:09:33.853] ------------------------------
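A minimal sketch of the mapping under test: a secret key remapped to a new file path with a per-item mode (the secret name, key, and path are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400)
	src := corev1.SecretVolumeSource{
		SecretName: "secret-test-map",
		Items: []corev1.KeyToPath{{
			Key:  "data-1",
			Path: "new-path-data-1", // file appears at <mountPath>/new-path-data-1
			Mode: &mode,
		}},
	}
	fmt.Println(src.Items[0].Path)
}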
I0523 04:09:33.853] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
I0523 04:09:33.853]   getting/updating/patching custom resource definition status sub-resource works  [Conformance]
I0523 04:09:33.853]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.853] [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 13 lines ...
I0523 04:09:33.857] I0523 03:46:54.248794      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.857] May 23 03:46:54.248: INFO: >>> kubeConfig: /tmp/kubeconfig-142995523
I0523 04:09:33.857] [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
I0523 04:09:33.857]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.857] May 23 03:46:54.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.858] STEP: Destroying namespace "custom-resource-definition-3413" for this suite.
I0523 04:09:33.858] •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":292,"completed":199,"skipped":3011,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.858] SSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.859] ------------------------------
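For reference, the status reads and writes in the test above go through the CRD's /status subresource rather than the main resource; a small sketch of the REST paths involved (the CRD name is illustrative):

package main

import "fmt"

func main() {
	const base = "/apis/apiextensions.k8s.io/v1/customresourcedefinitions"
	name := "foos.example.com"
	fmt.Println("GET   ", base+"/"+name+"/status") // read status
	fmt.Println("PUT   ", base+"/"+name+"/status") // update status only
	fmt.Println("PATCH ", base+"/"+name+"/status") // patch status only
}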
I0523 04:09:33.859] [k8s.io] InitContainer [NodeConformance] 
I0523 04:09:33.859]   should invoke init containers on a RestartNever pod [Conformance]
I0523 04:09:33.859]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.859] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 19 lines ...
I0523 04:09:33.864] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0523 04:09:33.864]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.864] May 23 03:46:58.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.864] I0523 03:46:58.753679      17 retrywatcher.go:147] Stopping RetryWatcher.
I0523 04:09:33.865] I0523 03:46:58.753833      17 retrywatcher.go:275] Stopping RetryWatcher.
I0523 04:09:33.865] STEP: Destroying namespace "init-container-7535" for this suite.
I0523 04:09:33.865] •{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":292,"completed":200,"skipped":3038,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.865] SS
I0523 04:09:33.865] ------------------------------
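A minimal sketch of the pod shape exercised above: init containers run to completion in order before the app container starts, and with RestartPolicy Never a failed init container fails the pod (images and commands are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	pod := corev1.Pod{Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever, // failed init => pod fails, no retry
		InitContainers: []corev1.Container{
			{Name: "init1", Image: "busybox", Command: []string{"true"}},
			{Name: "init2", Image: "busybox", Command: []string{"true"}},
		},
		Containers: []corev1.Container{
			{Name: "app", Image: "busybox", Command: []string{"sleep", "3600"}},
		},
	}}
	fmt.Println(len(pod.Spec.InitContainers), "init containers run before", pod.Spec.Containers[0].Name)
}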
I0523 04:09:33.865] [sig-storage] Projected downwardAPI 
I0523 04:09:33.866]   should provide container's memory limit [NodeConformance] [Conformance]
I0523 04:09:33.866]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.866] [BeforeEach] [sig-storage] Projected downwardAPI
... skipping 11 lines ...
I0523 04:09:33.869] [BeforeEach] [sig-storage] Projected downwardAPI
I0523 04:09:33.869]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
I0523 04:09:33.869] [It] should provide container's memory limit [NodeConformance] [Conformance]
I0523 04:09:33.870]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.870] I0523 03:46:58.891391      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.870] STEP: Creating a pod to test downward API volume plugin
I0523 04:09:33.870] May 23 03:46:58.897: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d092b326-8963-4237-96ee-d4ee7906ab87" in namespace "projected-2103" to be "Succeeded or Failed"
I0523 04:09:33.871] May 23 03:46:58.899: INFO: Pod "downwardapi-volume-d092b326-8963-4237-96ee-d4ee7906ab87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003279ms
I0523 04:09:33.871] May 23 03:47:00.902: INFO: Pod "downwardapi-volume-d092b326-8963-4237-96ee-d4ee7906ab87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005371425s
I0523 04:09:33.871] STEP: Saw pod success
I0523 04:09:33.871] May 23 03:47:00.902: INFO: Pod "downwardapi-volume-d092b326-8963-4237-96ee-d4ee7906ab87" satisfied condition "Succeeded or Failed"
I0523 04:09:33.872] May 23 03:47:00.905: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-d092b326-8963-4237-96ee-d4ee7906ab87 container client-container: <nil>
I0523 04:09:33.872] STEP: delete the pod
I0523 04:09:33.872] May 23 03:47:00.917: INFO: Waiting for pod downwardapi-volume-d092b326-8963-4237-96ee-d4ee7906ab87 to disappear
I0523 04:09:33.872] May 23 03:47:00.919: INFO: Pod downwardapi-volume-d092b326-8963-4237-96ee-d4ee7906ab87 no longer exists
I0523 04:09:33.873] [AfterEach] [sig-storage] Projected downwardAPI
I0523 04:09:33.873]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.873] May 23 03:47:00.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.873] STEP: Destroying namespace "projected-2103" for this suite.
I0523 04:09:33.874] •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":292,"completed":201,"skipped":3040,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.874] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.874] ------------------------------
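A minimal sketch of the resourceFieldRef this test reads back: the container's memory limit projected into a file via a downwardAPI projection (the file name is illustrative; "client-container" matches the container name in the log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	src := corev1.VolumeProjection{
		DownwardAPI: &corev1.DownwardAPIProjection{
			Items: []corev1.DownwardAPIVolumeFile{{
				Path: "memory_limit",
				ResourceFieldRef: &corev1.ResourceFieldSelector{
					ContainerName: "client-container",
					Resource:      "limits.memory", // written into the mounted file
				},
			}},
		},
	}
	fmt.Println(src.DownwardAPI.Items[0].ResourceFieldRef.Resource)
}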
I0523 04:09:33.874] [sig-node] ConfigMap 
I0523 04:09:33.874]   should fail to create ConfigMap with empty key [Conformance]
I0523 04:09:33.874]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.875] [BeforeEach] [sig-node] ConfigMap
I0523 04:09:33.875]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0523 04:09:33.875] STEP: Creating a kubernetes client
I0523 04:09:33.875] May 23 03:47:00.927: INFO: >>> kubeConfig: /tmp/kubeconfig-142995523
I0523 04:09:33.875] STEP: Building a namespace api object, basename configmap
I0523 04:09:33.875] I0523 03:47:00.932519      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.876] I0523 03:47:00.932548      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.876] STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-8136
I0523 04:09:33.876] I0523 03:47:00.948492      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.876] STEP: Waiting for a default service account to be provisioned in namespace
I0523 04:09:33.877] I0523 03:47:01.053654      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.877] I0523 03:47:01.053682      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.877] [It] should fail to create ConfigMap with empty key [Conformance]
I0523 04:09:33.877]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.877] I0523 03:47:01.056059      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.878] STEP: Creating configMap that has name configmap-test-emptyKey-9dd46471-e2af-44bc-b261-7bae39e84a3d
I0523 04:09:33.878] [AfterEach] [sig-node] ConfigMap
I0523 04:09:33.878]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.878] May 23 03:47:01.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.878] STEP: Destroying namespace "configmap-8136" for this suite.
I0523 04:09:33.878] •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":292,"completed":202,"skipped":3079,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.879] SSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.879] ------------------------------
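A minimal sketch of the invalid object submitted in the test above: a ConfigMap whose data map uses an empty key, which API server validation rejects (the object name is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
		Data:       map[string]string{"": "value"}, // "" is not a valid data key
	}
	// Submitting this to the API server fails validation on data[""].
	fmt.Println(len(cm.Data), "data entry, rejected at create time")
}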
I0523 04:09:33.879] [sig-storage] EmptyDir volumes 
I0523 04:09:33.879]   should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:33.879]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.879] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 9 lines ...
I0523 04:09:33.881] I0523 03:47:01.185328      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.882] I0523 03:47:01.185354      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.882] [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:33.882]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.882] I0523 03:47:01.187788      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.882] STEP: Creating a pod to test emptydir 0777 on tmpfs
I0523 04:09:33.882] May 23 03:47:01.192: INFO: Waiting up to 5m0s for pod "pod-14aa603d-01d5-457a-b040-2bab1de0cd76" in namespace "emptydir-5461" to be "Succeeded or Failed"
I0523 04:09:33.883] May 23 03:47:01.196: INFO: Pod "pod-14aa603d-01d5-457a-b040-2bab1de0cd76": Phase="Pending", Reason="", readiness=false. Elapsed: 3.425733ms
I0523 04:09:33.883] May 23 03:47:03.199: INFO: Pod "pod-14aa603d-01d5-457a-b040-2bab1de0cd76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006467048s
I0523 04:09:33.883] STEP: Saw pod success
I0523 04:09:33.883] May 23 03:47:03.199: INFO: Pod "pod-14aa603d-01d5-457a-b040-2bab1de0cd76" satisfied condition "Succeeded or Failed"
I0523 04:09:33.883] May 23 03:47:03.201: INFO: Trying to get logs from node kind-worker pod pod-14aa603d-01d5-457a-b040-2bab1de0cd76 container test-container: <nil>
I0523 04:09:33.884] STEP: delete the pod
I0523 04:09:33.884] May 23 03:47:03.211: INFO: Waiting for pod pod-14aa603d-01d5-457a-b040-2bab1de0cd76 to disappear
I0523 04:09:33.884] May 23 03:47:03.213: INFO: Pod pod-14aa603d-01d5-457a-b040-2bab1de0cd76 no longer exists
I0523 04:09:33.884] [AfterEach] [sig-storage] EmptyDir volumes
I0523 04:09:33.884]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.884] May 23 03:47:03.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.885] STEP: Destroying namespace "emptydir-5461" for this suite.
I0523 04:09:33.885] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":203,"skipped":3104,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.885] SSSS
I0523 04:09:33.885] ------------------------------
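A minimal sketch of the volume under test: an emptyDir backed by tmpfs via medium Memory, which is what the pod's 0777 permission checks run against (the volume name is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
	fmt.Println(vol.VolumeSource.EmptyDir.Medium) // "Memory" => tmpfs mount
}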
I0523 04:09:33.885] [sig-api-machinery] ResourceQuota 
I0523 04:09:33.885]   should verify ResourceQuota with terminating scopes. [Conformance]
I0523 04:09:33.886]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.886] [BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 33 lines ...
I0523 04:09:33.891] • [SLOW TEST:16.194 seconds]
I0523 04:09:33.891] [sig-api-machinery] ResourceQuota
I0523 04:09:33.892] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:33.892]   should verify ResourceQuota with terminating scopes. [Conformance]
I0523 04:09:33.892]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.892] ------------------------------
I0523 04:09:33.892] {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":292,"completed":204,"skipped":3108,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.893] S
I0523 04:09:33.893] ------------------------------
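A minimal sketch of a quota carrying the Terminating scope, so only pods with an activeDeadlineSeconds count against it, which is the scoping behavior this test verifies (the hard limit is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	quota := corev1.ResourceQuotaSpec{
		Hard: corev1.ResourceList{
			corev1.ResourcePods: resource.MustParse("5"),
		},
		Scopes: []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeTerminating},
	}
	fmt.Println(quota.Scopes[0]) // only pods with an active deadline are counted
}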
I0523 04:09:33.893] [sig-cli] Kubectl client Proxy server 
I0523 04:09:33.893]   should support --unix-socket=/path  [Conformance]
I0523 04:09:33.893]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.893] [BeforeEach] [sig-cli] Kubectl client
... skipping 17 lines ...
I0523 04:09:33.897] May 23 03:47:19.544: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-142995523 proxy --unix-socket=/tmp/kubectl-proxy-unix718487046/test'
I0523 04:09:33.897] STEP: retrieving proxy /api/ output
I0523 04:09:33.897] [AfterEach] [sig-cli] Kubectl client
I0523 04:09:33.897]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.898] May 23 03:47:19.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.898] STEP: Destroying namespace "kubectl-4337" for this suite.
I0523 04:09:33.898] •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":292,"completed":205,"skipped":3109,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.898] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.898] ------------------------------
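The proxy test above launches kubectl proxy with --unix-socket and then fetches /api/ through that socket. For reference, this is how a Go client can speak HTTP over a Unix socket (the socket path here is an assumed placeholder; the test generates a temporary one):

package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"net"
	"net/http"
)

func main() {
	sock := "/tmp/kubectl-proxy.sock" // assumed path
	tr := &http.Transport{
		// Route every request over the Unix socket instead of TCP.
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", sock)
		},
	}
	client := &http.Client{Transport: tr}
	resp, err := client.Get("http://localhost/api/") // host is ignored by the dialer
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(body))
}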
I0523 04:09:33.899] [k8s.io] Docker Containers 
I0523 04:09:33.899]   should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
I0523 04:09:33.899]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.899] [BeforeEach] [k8s.io] Docker Containers
... skipping 9 lines ...
I0523 04:09:33.901] I0523 03:47:19.742166      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.902] I0523 03:47:19.742197      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.902] [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
I0523 04:09:33.902]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.902] I0523 03:47:19.744244      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.902] STEP: Creating a pod to test override arguments
I0523 04:09:33.903] May 23 03:47:19.749: INFO: Waiting up to 5m0s for pod "client-containers-fa1fb346-d6ce-46df-a70d-4509cb40e63b" in namespace "containers-1073" to be "Succeeded or Failed"
I0523 04:09:33.903] May 23 03:47:19.751: INFO: Pod "client-containers-fa1fb346-d6ce-46df-a70d-4509cb40e63b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.681919ms
I0523 04:09:33.903] May 23 03:47:21.754: INFO: Pod "client-containers-fa1fb346-d6ce-46df-a70d-4509cb40e63b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004800983s
I0523 04:09:33.903] May 23 03:47:23.757: INFO: Pod "client-containers-fa1fb346-d6ce-46df-a70d-4509cb40e63b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007752732s
I0523 04:09:33.903] STEP: Saw pod success
I0523 04:09:33.904] May 23 03:47:23.757: INFO: Pod "client-containers-fa1fb346-d6ce-46df-a70d-4509cb40e63b" satisfied condition "Succeeded or Failed"
I0523 04:09:33.904] May 23 03:47:23.759: INFO: Trying to get logs from node kind-worker pod client-containers-fa1fb346-d6ce-46df-a70d-4509cb40e63b container test-container: <nil>
I0523 04:09:33.904] STEP: delete the pod
I0523 04:09:33.904] May 23 03:47:23.772: INFO: Waiting for pod client-containers-fa1fb346-d6ce-46df-a70d-4509cb40e63b to disappear
I0523 04:09:33.904] May 23 03:47:23.774: INFO: Pod client-containers-fa1fb346-d6ce-46df-a70d-4509cb40e63b no longer exists
I0523 04:09:33.905] [AfterEach] [k8s.io] Docker Containers
I0523 04:09:33.905]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.905] May 23 03:47:23.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.905] STEP: Destroying namespace "containers-1073" for this suite.
I0523 04:09:33.905] •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":292,"completed":206,"skipped":3145,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.905] SSSSSSSSSSSSSSSSSS
I0523 04:09:33.906] ------------------------------
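Overriding an image's default arguments, as this test does, corresponds to setting args on the container (setting command would override the image entrypoint instead). A sketch with illustrative image and arguments, not the exact values used by the suite:

import corev1 "k8s.io/api/core/v1"

// Args replaces the image's CMD; leaving Command unset keeps the ENTRYPOINT.
var overrideArgs = corev1.Container{
	Name:  "test-container",
	Image: "docker.io/library/busybox:1.29", // illustrative
	Args:  []string{"echo", "override", "arguments"},
}

The "image defaults if command and args are blank" test later in this log is the inverse case: both fields left unset so the image's own entrypoint and command run.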
I0523 04:09:33.906] [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
I0523 04:09:33.906]   should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
I0523 04:09:33.906]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.906] [BeforeEach] [k8s.io] Security Context
... skipping 10 lines ...
I0523 04:09:33.909] I0523 03:47:23.902612      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.909] [BeforeEach] [k8s.io] Security Context
I0523 04:09:33.909]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
I0523 04:09:33.909] I0523 03:47:23.905022      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.910] [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
I0523 04:09:33.910]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.910] May 23 03:47:23.910: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-2246b184-c358-48cf-89d4-0aeba487cbc9" in namespace "security-context-test-8908" to be "Succeeded or Failed"
I0523 04:09:33.910] May 23 03:47:23.912: INFO: Pod "busybox-readonly-false-2246b184-c358-48cf-89d4-0aeba487cbc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.350903ms
I0523 04:09:33.910] May 23 03:47:25.915: INFO: Pod "busybox-readonly-false-2246b184-c358-48cf-89d4-0aeba487cbc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005409703s
I0523 04:09:33.911] May 23 03:47:25.915: INFO: Pod "busybox-readonly-false-2246b184-c358-48cf-89d4-0aeba487cbc9" satisfied condition "Succeeded or Failed"
I0523 04:09:33.911] [AfterEach] [k8s.io] Security Context
I0523 04:09:33.911]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.911] May 23 03:47:25.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.911] STEP: Destroying namespace "security-context-test-8908" for this suite.
I0523 04:09:33.912] •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":292,"completed":207,"skipped":3163,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.912] SSSSSSSSSSSSSSSSSSS
I0523 04:09:33.912] ------------------------------
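readOnlyRootFilesystem is a per-container security setting; with it false (or unset) the root filesystem stays writable, which the test asserts by writing a file. Sketch (the helper, names, and command are illustrative):

import corev1 "k8s.io/api/core/v1"

func boolPtr(b bool) *bool { return &b }

// With ReadOnlyRootFilesystem=false the write to /tmp on the container's
// root filesystem succeeds; with true it would fail with a read-only error
// unless a writable volume were mounted there.
var writableRootfs = corev1.Container{
	Name:    "busybox-readonly-false",
	Image:   "docker.io/library/busybox:1.29",
	Command: []string{"sh", "-c", "touch /tmp/ok && echo writable"},
	SecurityContext: &corev1.SecurityContext{
		ReadOnlyRootFilesystem: boolPtr(false),
	},
}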
I0523 04:09:33.912] [sig-apps] Daemon set [Serial] 
I0523 04:09:33.913]   should run and stop complex daemon [Conformance]
I0523 04:09:33.913]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.913] [BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 76 lines ...
I0523 04:09:33.926] • [SLOW TEST:20.767 seconds]
I0523 04:09:33.926] [sig-apps] Daemon set [Serial]
I0523 04:09:33.926] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0523 04:09:33.927]   should run and stop complex daemon [Conformance]
I0523 04:09:33.927]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.927] ------------------------------
I0523 04:09:33.927] {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":292,"completed":208,"skipped":3182,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.927] SSSSSSSSSSSSS
I0523 04:09:33.927] ------------------------------
I0523 04:09:33.928] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0523 04:09:33.928]   should be able to deny pod and configmap creation [Conformance]
I0523 04:09:33.928]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.928] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 41 lines ...
I0523 04:09:33.936] • [SLOW TEST:14.024 seconds]
I0523 04:09:33.936] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0523 04:09:33.936] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:33.936]   should be able to deny pod and configmap creation [Conformance]
I0523 04:09:33.936]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.936] ------------------------------
I0523 04:09:33.937] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":292,"completed":209,"skipped":3195,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.937] SSSSSSSS
I0523 04:09:33.937] ------------------------------
I0523 04:09:33.937] [k8s.io] Pods 
I0523 04:09:33.937]   should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
I0523 04:09:33.938]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.938] [BeforeEach] [k8s.io] Pods
... skipping 17 lines ...
I0523 04:09:33.942] STEP: creating the pod
I0523 04:09:33.942] STEP: submitting the pod to kubernetes
I0523 04:09:33.942] [AfterEach] [k8s.io] Pods
I0523 04:09:33.942]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.942] May 23 03:48:04.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.942] STEP: Destroying namespace "pods-6067" for this suite.
I0523 04:09:33.943] •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":292,"completed":210,"skipped":3203,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.943] SS
I0523 04:09:33.943] ------------------------------
I0523 04:09:33.943] [sig-api-machinery] Secrets 
I0523 04:09:33.943]   should fail to create secret due to empty secret key [Conformance]
I0523 04:09:33.943]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.944] [BeforeEach] [sig-api-machinery] Secrets
I0523 04:09:33.944]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0523 04:09:33.944] STEP: Creating a kubernetes client
I0523 04:09:33.944] May 23 03:48:04.870: INFO: >>> kubeConfig: /tmp/kubeconfig-142995523
I0523 04:09:33.944] STEP: Building a namespace api object, basename secrets
I0523 04:09:33.945] I0523 03:48:04.874519      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.945] I0523 03:48:04.874547      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.945] I0523 03:48:04.887553      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.945] STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-6920
I0523 04:09:33.945] STEP: Waiting for a default service account to be provisioned in namespace
I0523 04:09:33.946] I0523 03:48:04.992616      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.946] I0523 03:48:04.992638      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.946] [It] should fail to create secret due to empty secret key [Conformance]
I0523 04:09:33.946]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.947] I0523 03:48:04.995000      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.947] STEP: Creating projection with secret that has name secret-emptykey-test-0c8a66bf-677e-4818-9cbe-124d61b8cb88
I0523 04:09:33.947] [AfterEach] [sig-api-machinery] Secrets
I0523 04:09:33.947]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.947] May 23 03:48:04.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.948] STEP: Destroying namespace "secrets-6920" for this suite.
I0523 04:09:33.948] •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":292,"completed":211,"skipped":3205,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.948] SSSSSSS
I0523 04:09:33.948] ------------------------------
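This Secrets test is pure API validation: a data map keyed by the empty string must be rejected at create time, so no pod is ever started. A sketch assuming a configured clientset cs (the function and secret names are illustrative):

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createEmptyKeySecret should return a validation error: "" is not a valid data key.
func createEmptyKeySecret(ctx context.Context, cs kubernetes.Interface, ns string) error {
	s := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test"},
		Data:       map[string][]byte{"": []byte("value-1")},
	}
	_, err := cs.CoreV1().Secrets(ns).Create(ctx, s, metav1.CreateOptions{})
	return err // expected non-nil
}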
I0523 04:09:33.948] [k8s.io] Container Runtime blackbox test when starting a container that exits 
I0523 04:09:33.949]   should run with the expected status [NodeConformance] [Conformance]
I0523 04:09:33.949]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.949] [BeforeEach] [k8s.io] Container Runtime
... skipping 38 lines ...
I0523 04:09:33.957]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
I0523 04:09:33.957]     when starting a container that exits
I0523 04:09:33.957]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42
I0523 04:09:33.957]       should run with the expected status [NodeConformance] [Conformance]
I0523 04:09:33.957]       /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.957] ------------------------------
I0523 04:09:33.958] {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":292,"completed":212,"skipped":3212,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.958] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.958] ------------------------------
I0523 04:09:33.958] [sig-storage] Subpath Atomic writer volumes 
I0523 04:09:33.958]   should support subpaths with configmap pod [LinuxOnly] [Conformance]
I0523 04:09:33.959]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.959] [BeforeEach] [sig-storage] Subpath
... skipping 13 lines ...
I0523 04:09:33.962] I0523 03:48:27.401136      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.962] STEP: Setting up data
I0523 04:09:33.962] [It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
I0523 04:09:33.962]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.962] STEP: Creating pod pod-subpath-test-configmap-q22m
I0523 04:09:33.963] STEP: Creating a pod to test atomic-volume-subpath
I0523 04:09:33.963] May 23 03:48:27.411: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-q22m" in namespace "subpath-9833" to be "Succeeded or Failed"
I0523 04:09:33.963] May 23 03:48:27.415: INFO: Pod "pod-subpath-test-configmap-q22m": Phase="Pending", Reason="", readiness=false. Elapsed: 3.845557ms
I0523 04:09:33.963] May 23 03:48:29.418: INFO: Pod "pod-subpath-test-configmap-q22m": Phase="Running", Reason="", readiness=true. Elapsed: 2.007016957s
I0523 04:09:33.963] May 23 03:48:31.422: INFO: Pod "pod-subpath-test-configmap-q22m": Phase="Running", Reason="", readiness=true. Elapsed: 4.010671666s
I0523 04:09:33.964] May 23 03:48:33.425: INFO: Pod "pod-subpath-test-configmap-q22m": Phase="Running", Reason="", readiness=true. Elapsed: 6.014181196s
I0523 04:09:33.964] May 23 03:48:35.429: INFO: Pod "pod-subpath-test-configmap-q22m": Phase="Running", Reason="", readiness=true. Elapsed: 8.017533212s
I0523 04:09:33.964] May 23 03:48:37.432: INFO: Pod "pod-subpath-test-configmap-q22m": Phase="Running", Reason="", readiness=true. Elapsed: 10.020923042s
I0523 04:09:33.964] May 23 03:48:39.435: INFO: Pod "pod-subpath-test-configmap-q22m": Phase="Running", Reason="", readiness=true. Elapsed: 12.024223288s
I0523 04:09:33.964] May 23 03:48:41.439: INFO: Pod "pod-subpath-test-configmap-q22m": Phase="Running", Reason="", readiness=true. Elapsed: 14.028281102s
I0523 04:09:33.964] May 23 03:48:43.443: INFO: Pod "pod-subpath-test-configmap-q22m": Phase="Running", Reason="", readiness=true. Elapsed: 16.031583278s
I0523 04:09:33.965] May 23 03:48:45.446: INFO: Pod "pod-subpath-test-configmap-q22m": Phase="Running", Reason="", readiness=true. Elapsed: 18.034578014s
I0523 04:09:33.965] May 23 03:48:47.449: INFO: Pod "pod-subpath-test-configmap-q22m": Phase="Running", Reason="", readiness=true. Elapsed: 20.037601866s
I0523 04:09:33.965] May 23 03:48:49.452: INFO: Pod "pod-subpath-test-configmap-q22m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.040614189s
I0523 04:09:33.965] STEP: Saw pod success
I0523 04:09:33.965] May 23 03:48:49.452: INFO: Pod "pod-subpath-test-configmap-q22m" satisfied condition "Succeeded or Failed"
I0523 04:09:33.966] May 23 03:48:49.454: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-configmap-q22m container test-container-subpath-configmap-q22m: <nil>
I0523 04:09:33.966] STEP: delete the pod
I0523 04:09:33.966] May 23 03:48:49.466: INFO: Waiting for pod pod-subpath-test-configmap-q22m to disappear
I0523 04:09:33.966] May 23 03:48:49.470: INFO: Pod pod-subpath-test-configmap-q22m no longer exists
I0523 04:09:33.966] STEP: Deleting pod pod-subpath-test-configmap-q22m
I0523 04:09:33.966] May 23 03:48:49.470: INFO: Deleting pod "pod-subpath-test-configmap-q22m" in namespace "subpath-9833"
... skipping 7 lines ...
I0523 04:09:33.967] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
I0523 04:09:33.968]   Atomic writer volumes
I0523 04:09:33.968]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
I0523 04:09:33.968]     should support subpaths with configmap pod [LinuxOnly] [Conformance]
I0523 04:09:33.968]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.968] ------------------------------
I0523 04:09:33.969] {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":292,"completed":213,"skipped":3298,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.969] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.969] ------------------------------
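The subpath test above mounts a single key of a ConfigMap into the container via subPath and verifies the content stays correct while the pod runs, hence the long run of Running phases before Succeeded. A sketch of the two relevant pieces, volume and mount (names and paths are illustrative):

import corev1 "k8s.io/api/core/v1"

// The configMap volume projects its keys as files; SubPath mounts one file.
var subpathVolume = corev1.Volume{
	Name: "config",
	VolumeSource: corev1.VolumeSource{
		ConfigMap: &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"}, // assumed name
		},
	},
}

var subpathMount = corev1.VolumeMount{
	Name:      "config",
	MountPath: "/etc/app/config.yaml",
	SubPath:   "config.yaml", // a single key from the ConfigMap
}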
I0523 04:09:33.969] [k8s.io] Pods 
I0523 04:09:33.969]   should be updated [NodeConformance] [Conformance]
I0523 04:09:33.969]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.969] [BeforeEach] [k8s.io] Pods
... skipping 21 lines ...
I0523 04:09:33.973] STEP: verifying the updated pod is in kubernetes
I0523 04:09:33.973] May 23 03:48:52.131: INFO: Pod update OK
I0523 04:09:33.974] [AfterEach] [k8s.io] Pods
I0523 04:09:33.974]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.974] May 23 03:48:52.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.974] STEP: Destroying namespace "pods-7835" for this suite.
I0523 04:09:33.974] •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":292,"completed":214,"skipped":3334,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.974] SSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.975] ------------------------------
I0523 04:09:33.975] [sig-storage] Subpath Atomic writer volumes 
I0523 04:09:33.975]   should support subpaths with projected pod [LinuxOnly] [Conformance]
I0523 04:09:33.975]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.975] [BeforeEach] [sig-storage] Subpath
... skipping 13 lines ...
I0523 04:09:33.978] STEP: Setting up data
I0523 04:09:33.978] I0523 03:48:52.262804      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.978] [It] should support subpaths with projected pod [LinuxOnly] [Conformance]
I0523 04:09:33.978]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.978] STEP: Creating pod pod-subpath-test-projected-7dsk
I0523 04:09:33.979] STEP: Creating a pod to test atomic-volume-subpath
I0523 04:09:33.979] May 23 03:48:52.272: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-7dsk" in namespace "subpath-9385" to be "Succeeded or Failed"
I0523 04:09:33.979] May 23 03:48:52.274: INFO: Pod "pod-subpath-test-projected-7dsk": Phase="Pending", Reason="", readiness=false. Elapsed: 1.952237ms
I0523 04:09:33.979] May 23 03:48:54.276: INFO: Pod "pod-subpath-test-projected-7dsk": Phase="Running", Reason="", readiness=true. Elapsed: 2.004243633s
I0523 04:09:33.980] May 23 03:48:56.279: INFO: Pod "pod-subpath-test-projected-7dsk": Phase="Running", Reason="", readiness=true. Elapsed: 4.006601156s
I0523 04:09:33.980] May 23 03:48:58.281: INFO: Pod "pod-subpath-test-projected-7dsk": Phase="Running", Reason="", readiness=true. Elapsed: 6.009206048s
I0523 04:09:33.980] May 23 03:49:00.284: INFO: Pod "pod-subpath-test-projected-7dsk": Phase="Running", Reason="", readiness=true. Elapsed: 8.012199008s
I0523 04:09:33.980] May 23 03:49:02.287: INFO: Pod "pod-subpath-test-projected-7dsk": Phase="Running", Reason="", readiness=true. Elapsed: 10.015419894s
I0523 04:09:33.980] May 23 03:49:04.291: INFO: Pod "pod-subpath-test-projected-7dsk": Phase="Running", Reason="", readiness=true. Elapsed: 12.018501131s
I0523 04:09:33.981] May 23 03:49:06.294: INFO: Pod "pod-subpath-test-projected-7dsk": Phase="Running", Reason="", readiness=true. Elapsed: 14.021844558s
I0523 04:09:33.981] May 23 03:49:08.297: INFO: Pod "pod-subpath-test-projected-7dsk": Phase="Running", Reason="", readiness=true. Elapsed: 16.024944455s
I0523 04:09:33.981] May 23 03:49:10.300: INFO: Pod "pod-subpath-test-projected-7dsk": Phase="Running", Reason="", readiness=true. Elapsed: 18.028152322s
I0523 04:09:33.981] May 23 03:49:12.304: INFO: Pod "pod-subpath-test-projected-7dsk": Phase="Running", Reason="", readiness=true. Elapsed: 20.031546459s
I0523 04:09:33.982] May 23 03:49:14.307: INFO: Pod "pod-subpath-test-projected-7dsk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.034671802s
I0523 04:09:33.982] STEP: Saw pod success
I0523 04:09:33.982] May 23 03:49:14.307: INFO: Pod "pod-subpath-test-projected-7dsk" satisfied condition "Succeeded or Failed"
I0523 04:09:33.982] May 23 03:49:14.309: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-projected-7dsk container test-container-subpath-projected-7dsk: <nil>
I0523 04:09:33.982] STEP: delete the pod
I0523 04:09:33.982] May 23 03:49:14.320: INFO: Waiting for pod pod-subpath-test-projected-7dsk to disappear
I0523 04:09:33.982] May 23 03:49:14.322: INFO: Pod pod-subpath-test-projected-7dsk no longer exists
I0523 04:09:33.983] STEP: Deleting pod pod-subpath-test-projected-7dsk
I0523 04:09:33.983] May 23 03:49:14.322: INFO: Deleting pod "pod-subpath-test-projected-7dsk" in namespace "subpath-9385"
... skipping 7 lines ...
I0523 04:09:33.984] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
I0523 04:09:33.984]   Atomic writer volumes
I0523 04:09:33.984]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
I0523 04:09:33.984]     should support subpaths with projected pod [LinuxOnly] [Conformance]
I0523 04:09:33.984]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.985] ------------------------------
I0523 04:09:33.985] {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":292,"completed":215,"skipped":3362,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.985] S
I0523 04:09:33.985] ------------------------------
I0523 04:09:33.985] [k8s.io] Pods 
I0523 04:09:33.985]   should support remote command execution over websockets [NodeConformance] [Conformance]
I0523 04:09:33.985]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.985] [BeforeEach] [k8s.io] Pods
... skipping 17 lines ...
I0523 04:09:33.989] STEP: creating the pod
I0523 04:09:33.989] STEP: submitting the pod to kubernetes
I0523 04:09:33.989] [AfterEach] [k8s.io] Pods
I0523 04:09:33.989]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.989] May 23 03:49:16.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.989] STEP: Destroying namespace "pods-7153" for this suite.
I0523 04:09:33.990] •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":292,"completed":216,"skipped":3363,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.990] SSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.990] ------------------------------
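The pod exec test above drives the exec subresource over the API server's websocket protocol. From Go, the same subresource is more commonly driven through client-go's SPDY executor; the sketch below shows that related path rather than what the e2e code literally does (variable names and the container name are assumptions):

import (
	"bytes"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// execInPod runs a command in a pod's container via the exec subresource.
// (client-go negotiates SPDY here; the e2e test speaks the websocket
// variant of the same endpoint directly.)
func execInPod(cs kubernetes.Interface, config *rest.Config, ns, pod string) (string, error) {
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "main", // assumed container name
			Command:   []string{"echo", "remote execution"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		return "", err
	}
	var out, errOut bytes.Buffer
	err = exec.Stream(remotecommand.StreamOptions{Stdout: &out, Stderr: &errOut})
	return out.String(), err
}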
I0523 04:09:33.990] [sig-api-machinery] Namespaces [Serial] 
I0523 04:09:33.990]   should patch a Namespace [Conformance]
I0523 04:09:33.990]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.991] [BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 20 lines ...
I0523 04:09:33.995] STEP: get the Namespace and ensuring it has the label
I0523 04:09:33.995] [AfterEach] [sig-api-machinery] Namespaces [Serial]
I0523 04:09:33.995]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:33.995] May 23 03:49:16.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:33.996] STEP: Destroying namespace "namespaces-6054" for this suite.
I0523 04:09:33.996] STEP: Destroying namespace "nspatchtest-67f75861-d449-4270-be3d-e1dc03904493-3734" for this suite.
I0523 04:09:33.996] •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":292,"completed":217,"skipped":3384,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:33.996] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:33.996] ------------------------------
I0523 04:09:33.997] [sig-storage] Projected secret 
I0523 04:09:33.997]   should be consumable from pods in volume [NodeConformance] [Conformance]
I0523 04:09:33.997]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:33.997] [BeforeEach] [sig-storage] Projected secret
... skipping 10 lines ...
I0523 04:09:33.999] I0523 03:49:16.969813      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:33.999] [It] should be consumable from pods in volume [NodeConformance] [Conformance]
I0523 04:09:34.000]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.000] I0523 03:49:16.972088      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.000] STEP: Creating projection with secret that has name projected-secret-test-e4a8f46f-3263-43ff-8560-c55dfe8f0782
I0523 04:09:34.000] STEP: Creating a pod to test consume secrets
I0523 04:09:34.001] May 23 03:49:16.979: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bfa9e711-c688-4267-bb7e-a8884674bb34" in namespace "projected-9241" to be "Succeeded or Failed"
I0523 04:09:34.001] May 23 03:49:16.981: INFO: Pod "pod-projected-secrets-bfa9e711-c688-4267-bb7e-a8884674bb34": Phase="Pending", Reason="", readiness=false. Elapsed: 1.841618ms
I0523 04:09:34.001] May 23 03:49:18.985: INFO: Pod "pod-projected-secrets-bfa9e711-c688-4267-bb7e-a8884674bb34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005321054s
I0523 04:09:34.001] May 23 03:49:20.988: INFO: Pod "pod-projected-secrets-bfa9e711-c688-4267-bb7e-a8884674bb34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008936667s
I0523 04:09:34.002] STEP: Saw pod success
I0523 04:09:34.002] May 23 03:49:20.988: INFO: Pod "pod-projected-secrets-bfa9e711-c688-4267-bb7e-a8884674bb34" satisfied condition "Succeeded or Failed"
I0523 04:09:34.002] May 23 03:49:20.991: INFO: Trying to get logs from node kind-worker pod pod-projected-secrets-bfa9e711-c688-4267-bb7e-a8884674bb34 container projected-secret-volume-test: <nil>
I0523 04:09:34.002] STEP: delete the pod
I0523 04:09:34.002] May 23 03:49:21.005: INFO: Waiting for pod pod-projected-secrets-bfa9e711-c688-4267-bb7e-a8884674bb34 to disappear
I0523 04:09:34.002] May 23 03:49:21.007: INFO: Pod pod-projected-secrets-bfa9e711-c688-4267-bb7e-a8884674bb34 no longer exists
I0523 04:09:34.002] [AfterEach] [sig-storage] Projected secret
I0523 04:09:34.003]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.003] May 23 03:49:21.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.003] STEP: Destroying namespace "projected-9241" for this suite.
I0523 04:09:34.003] •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":218,"skipped":3428,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.003] SSSSS
I0523 04:09:34.003] ------------------------------
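A projected volume can combine several sources (secrets, configMaps, downward API, service account tokens); this test projects a single secret and has the container print the resulting file. Sketch (the secret name is illustrative):

import corev1 "k8s.io/api/core/v1"

// A projected volume that surfaces one Secret's keys as files.
var projectedSecret = corev1.Volume{
	Name: "projected-secret-volume",
	VolumeSource: corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				Secret: &corev1.SecretProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"}, // assumed
				},
			}},
		},
	},
}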
I0523 04:09:34.004] [sig-storage] Projected secret 
I0523 04:09:34.004]   should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
I0523 04:09:34.004]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.004] [BeforeEach] [sig-storage] Projected secret
... skipping 10 lines ...
I0523 04:09:34.006] I0523 03:49:21.139632      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.006] [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
I0523 04:09:34.007]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.007] I0523 03:49:21.142117      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.007] STEP: Creating secret with name projected-secret-test-7d0f3474-5de3-4c51-a602-60ff6925a70c
I0523 04:09:34.007] STEP: Creating a pod to test consume secrets
I0523 04:09:34.008] May 23 03:49:21.150: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0932535a-708d-4457-8f88-9f94b01c425e" in namespace "projected-4191" to be "Succeeded or Failed"
I0523 04:09:34.008] May 23 03:49:21.152: INFO: Pod "pod-projected-secrets-0932535a-708d-4457-8f88-9f94b01c425e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10685ms
I0523 04:09:34.008] May 23 03:49:23.155: INFO: Pod "pod-projected-secrets-0932535a-708d-4457-8f88-9f94b01c425e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005325522s
I0523 04:09:34.008] STEP: Saw pod success
I0523 04:09:34.009] May 23 03:49:23.155: INFO: Pod "pod-projected-secrets-0932535a-708d-4457-8f88-9f94b01c425e" satisfied condition "Succeeded or Failed"
I0523 04:09:34.009] May 23 03:49:23.158: INFO: Trying to get logs from node kind-worker pod pod-projected-secrets-0932535a-708d-4457-8f88-9f94b01c425e container secret-volume-test: <nil>
I0523 04:09:34.009] STEP: delete the pod
I0523 04:09:34.009] May 23 03:49:23.170: INFO: Waiting for pod pod-projected-secrets-0932535a-708d-4457-8f88-9f94b01c425e to disappear
I0523 04:09:34.009] May 23 03:49:23.172: INFO: Pod pod-projected-secrets-0932535a-708d-4457-8f88-9f94b01c425e no longer exists
I0523 04:09:34.010] [AfterEach] [sig-storage] Projected secret
I0523 04:09:34.010]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.010] May 23 03:49:23.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.010] STEP: Destroying namespace "projected-4191" for this suite.
I0523 04:09:34.011] •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":292,"completed":219,"skipped":3433,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.011] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:34.011] ------------------------------
I0523 04:09:34.011] [sig-storage] Downward API volume 
I0523 04:09:34.011]   should provide podname only [NodeConformance] [Conformance]
I0523 04:09:34.011]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.011] [BeforeEach] [sig-storage] Downward API volume
... skipping 11 lines ...
I0523 04:09:34.014] [BeforeEach] [sig-storage] Downward API volume
I0523 04:09:34.014]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
I0523 04:09:34.014] I0523 03:49:23.308587      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.014] [It] should provide podname only [NodeConformance] [Conformance]
I0523 04:09:34.015]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.015] STEP: Creating a pod to test downward API volume plugin
I0523 04:09:34.015] May 23 03:49:23.313: INFO: Waiting up to 5m0s for pod "downwardapi-volume-014444e6-8f7a-4575-adf8-f5b988723234" in namespace "downward-api-4769" to be "Succeeded or Failed"
I0523 04:09:34.015] May 23 03:49:23.315: INFO: Pod "downwardapi-volume-014444e6-8f7a-4575-adf8-f5b988723234": Phase="Pending", Reason="", readiness=false. Elapsed: 2.387615ms
I0523 04:09:34.015] May 23 03:49:25.319: INFO: Pod "downwardapi-volume-014444e6-8f7a-4575-adf8-f5b988723234": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005575897s
I0523 04:09:34.016] STEP: Saw pod success
I0523 04:09:34.016] May 23 03:49:25.319: INFO: Pod "downwardapi-volume-014444e6-8f7a-4575-adf8-f5b988723234" satisfied condition "Succeeded or Failed"
I0523 04:09:34.016] May 23 03:49:25.321: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-014444e6-8f7a-4575-adf8-f5b988723234 container client-container: <nil>
I0523 04:09:34.016] STEP: delete the pod
I0523 04:09:34.016] May 23 03:49:25.335: INFO: Waiting for pod downwardapi-volume-014444e6-8f7a-4575-adf8-f5b988723234 to disappear
I0523 04:09:34.017] May 23 03:49:25.337: INFO: Pod downwardapi-volume-014444e6-8f7a-4575-adf8-f5b988723234 no longer exists
I0523 04:09:34.017] [AfterEach] [sig-storage] Downward API volume
I0523 04:09:34.017]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.017] May 23 03:49:25.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.017] STEP: Destroying namespace "downward-api-4769" for this suite.
I0523 04:09:34.018] •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":292,"completed":220,"skipped":3464,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.018] SSSSSSSSSSSSSS
I0523 04:09:34.018] ------------------------------
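The downward API volume in this test exposes only the pod's own name as a file, which the test container then prints. Sketch (volume and path names are illustrative):

import corev1 "k8s.io/api/core/v1"

// Exposes the pod's own name as the file <mountPath>/podname.
var downwardAPIVolume = corev1.Volume{
	Name: "podinfo",
	VolumeSource: corev1.VolumeSource{
		DownwardAPI: &corev1.DownwardAPIVolumeSource{
			Items: []corev1.DownwardAPIVolumeFile{{
				Path:     "podname",
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
			}},
		},
	},
}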
I0523 04:09:34.018] [sig-storage] Projected configMap 
I0523 04:09:34.018]   should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
I0523 04:09:34.018]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.019] [BeforeEach] [sig-storage] Projected configMap
... skipping 10 lines ...
I0523 04:09:34.021] I0523 03:49:25.466609      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.021] [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
I0523 04:09:34.021]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.021] I0523 03:49:25.469177      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.022] STEP: Creating configMap with name projected-configmap-test-volume-7e9a0ed3-74b1-4e1e-b6c1-08b38a4ac7f5
I0523 04:09:34.022] STEP: Creating a pod to test consume configMaps
I0523 04:09:34.022] May 23 03:49:25.477: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-09724ee4-b635-4c4e-8f29-134d7f9b10af" in namespace "projected-9754" to be "Succeeded or Failed"
I0523 04:09:34.022] May 23 03:49:25.478: INFO: Pod "pod-projected-configmaps-09724ee4-b635-4c4e-8f29-134d7f9b10af": Phase="Pending", Reason="", readiness=false. Elapsed: 1.720567ms
I0523 04:09:34.023] May 23 03:49:27.481: INFO: Pod "pod-projected-configmaps-09724ee4-b635-4c4e-8f29-134d7f9b10af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004840189s
I0523 04:09:34.023] STEP: Saw pod success
I0523 04:09:34.023] May 23 03:49:27.481: INFO: Pod "pod-projected-configmaps-09724ee4-b635-4c4e-8f29-134d7f9b10af" satisfied condition "Succeeded or Failed"
I0523 04:09:34.023] May 23 03:49:27.484: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-09724ee4-b635-4c4e-8f29-134d7f9b10af container projected-configmap-volume-test: <nil>
I0523 04:09:34.023] STEP: delete the pod
I0523 04:09:34.023] May 23 03:49:27.529: INFO: Waiting for pod pod-projected-configmaps-09724ee4-b635-4c4e-8f29-134d7f9b10af to disappear
I0523 04:09:34.024] May 23 03:49:27.531: INFO: Pod pod-projected-configmaps-09724ee4-b635-4c4e-8f29-134d7f9b10af no longer exists
I0523 04:09:34.024] [AfterEach] [sig-storage] Projected configMap
I0523 04:09:34.024]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.024] May 23 03:49:27.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.024] STEP: Destroying namespace "projected-9754" for this suite.
I0523 04:09:34.025] •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":292,"completed":221,"skipped":3478,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.025] SSSSSSSSSSSSSSSSSS
I0523 04:09:34.025] ------------------------------
I0523 04:09:34.025] [k8s.io] Docker Containers 
I0523 04:09:34.025]   should use the image defaults if command and args are blank [NodeConformance] [Conformance]
I0523 04:09:34.025]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.025] [BeforeEach] [k8s.io] Docker Containers
... skipping 12 lines ...
I0523 04:09:34.028]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.028] I0523 03:49:27.665091      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.028] [AfterEach] [k8s.io] Docker Containers
I0523 04:09:34.029]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.029] May 23 03:49:31.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.029] STEP: Destroying namespace "containers-2893" for this suite.
I0523 04:09:34.029] •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":292,"completed":222,"skipped":3496,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.029] SSSSS
I0523 04:09:34.030] ------------------------------
I0523 04:09:34.030] [k8s.io] InitContainer [NodeConformance] 
I0523 04:09:34.030]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0523 04:09:34.030]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.030] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0523 04:09:34.030]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0523 04:09:34.031] STEP: Creating a kubernetes client
I0523 04:09:34.031] May 23 03:49:31.685: INFO: >>> kubeConfig: /tmp/kubeconfig-142995523
I0523 04:09:34.031] STEP: Building a namespace api object, basename init-container
... skipping 4 lines ...
I0523 04:09:34.032] STEP: Waiting for a default service account to be provisioned in namespace
I0523 04:09:34.032] I0523 03:49:31.807911      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.033] I0523 03:49:31.807939      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.033] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0523 04:09:34.033]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
I0523 04:09:34.033] I0523 03:49:31.810234      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.033] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0523 04:09:34.034]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.034] STEP: creating the pod
I0523 04:09:34.034] May 23 03:49:31.810: INFO: PodSpec: initContainers in spec.initContainers
I0523 04:09:34.034] I0523 03:49:31.815567      17 retrywatcher.go:247] Starting RetryWatcher.
I0523 04:09:34.043] May 23 03:50:19.405: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-36ea98d4-2d04-4ea6-acb3-85525fe5ae29", GenerateName:"", Namespace:"init-container-6815", SelfLink:"/api/v1/namespaces/init-container-6815/pods/pod-init-36ea98d4-2d04-4ea6-acb3-85525fe5ae29", UID:"6d6e869a-dede-4d50-92d8-f2f01bb4ad32", ResourceVersion:"23977", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725802571, loc:(*time.Location)(0x8046260)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"810388272"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00338ad40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00338adc0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00338ae40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00338aec0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nb2kp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00237a640), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nb2kp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", 
Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nb2kp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nb2kp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0020a9f98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001fae2a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00251a070)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00251a140)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00251a148), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00251a14c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725802571, loc:(*time.Location)(0x8046260)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725802571, loc:(*time.Location)(0x8046260)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725802571, loc:(*time.Location)(0x8046260)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725802571, loc:(*time.Location)(0x8046260)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.2", PodIP:"10.244.2.85", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.85"}}, StartTime:(*v1.Time)(0xc00338af40), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001fae380)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001fae3f0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://f69a51c92c554f7eeadaeb79e2caa348b27ceb5f76dcf631c4a6e5f8a3164ac2", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00338b040), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00338afc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00251a1cf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
I0523 04:09:34.043] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0523 04:09:34.046]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.046] May 23 03:50:19.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.046] I0523 03:50:19.405892      17 retrywatcher.go:147] Stopping RetryWatcher.
I0523 04:09:34.046] I0523 03:50:19.406087      17 retrywatcher.go:275] Stopping RetryWatcher.
I0523 04:09:34.046] STEP: Destroying namespace "init-container-6815" for this suite.
I0523 04:09:34.046] 
I0523 04:09:34.046] • [SLOW TEST:47.727 seconds]
I0523 04:09:34.047] [k8s.io] InitContainer [NodeConformance]
I0523 04:09:34.047] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:34.047]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0523 04:09:34.047]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.047] ------------------------------
I0523 04:09:34.048] {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":292,"completed":223,"skipped":3501,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.049] SSSSSSS
I0523 04:09:34.049] ------------------------------
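The pod dump logged above pins down the shape of this test's pod: two init containers ahead of a pause container under RestartPolicy Always, with init1 permanently failing. Reconstructed as a spec (images, commands, and names taken from the dump; this is a sketch, not the test's literal construction code):

import corev1 "k8s.io/api/core/v1"

// init1 always fails, so init2 and run1 must never start; with
// RestartPolicy=Always the kubelet keeps retrying init1 with back-off,
// which is why the dump shows RestartCount:3 on init1 and run1 still Waiting.
var initPodSpec = corev1.PodSpec{
	RestartPolicy: corev1.RestartPolicyAlways,
	InitContainers: []corev1.Container{
		{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
		{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
	},
	Containers: []corev1.Container{
		{Name: "run1", Image: "k8s.gcr.io/pause:3.2"},
	},
}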
I0523 04:09:34.049] [sig-network] Services 
I0523 04:09:34.049]   should be able to change the type from ExternalName to ClusterIP [Conformance]
I0523 04:09:34.049]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.050] [BeforeEach] [sig-network] Services
... skipping 43 lines ...
I0523 04:09:34.058] • [SLOW TEST:6.626 seconds]
I0523 04:09:34.059] [sig-network] Services
I0523 04:09:34.059] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
I0523 04:09:34.059]   should be able to change the type from ExternalName to ClusterIP [Conformance]
I0523 04:09:34.059]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.059] ------------------------------
I0523 04:09:34.060] {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":292,"completed":224,"skipped":3508,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.060] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:34.060] ------------------------------
I0523 04:09:34.060] [sig-api-machinery] Namespaces [Serial] 
I0523 04:09:34.060]   should ensure that all pods are removed when a namespace is deleted [Conformance]
I0523 04:09:34.060]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.060] [BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 41 lines ...
I0523 04:09:34.067] • [SLOW TEST:29.407 seconds]
I0523 04:09:34.067] [sig-api-machinery] Namespaces [Serial]
I0523 04:09:34.068] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:34.068]   should ensure that all pods are removed when a namespace is deleted [Conformance]
I0523 04:09:34.068]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.068] ------------------------------
I0523 04:09:34.069] {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":292,"completed":225,"skipped":3541,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.069] SSSSSSSSSSSS
I0523 04:09:34.069] ------------------------------
I0523 04:09:34.069] [sig-storage] Downward API volume 
I0523 04:09:34.069]   should update annotations on modification [NodeConformance] [Conformance]
I0523 04:09:34.069]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.069] [BeforeEach] [sig-storage] Downward API volume
... skipping 16 lines ...
I0523 04:09:34.072] STEP: Creating the pod
I0523 04:09:34.073] May 23 03:50:58.102: INFO: Successfully updated pod "annotationupdate14bdf778-2ffe-4ce4-9e36-5d454fdb92a8"
I0523 04:09:34.073] [AfterEach] [sig-storage] Downward API volume
I0523 04:09:34.073]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.073] May 23 03:51:00.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.073] STEP: Destroying namespace "downward-api-4967" for this suite.
I0523 04:09:34.074] •{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":292,"completed":226,"skipped":3553,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.074] SSSS
I0523 04:09:34.074] ------------------------------
I0523 04:09:34.074] [sig-apps] Deployment 
I0523 04:09:34.074]   deployment should support proportional scaling [Conformance]
I0523 04:09:34.074]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.074] [BeforeEach] [sig-apps] Deployment
... skipping 45 lines ...
I0523 04:09:34.089] May 23 03:51:06.321: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797  deployment-215 /apis/apps/v1/namespaces/deployment-215/replicasets/webserver-deployment-84855cf797 164f5f43-649b-405c-94cc-ed0004b39d1d 24451 3 2020-05-23 03:51:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment ed9af42a-b28b-4ba1-8136-fd2d43959ab1 0xc002ad5af7 0xc002ad5af8}] []  [{kube-controller-manager Update apps/v1 2020-05-23 03:51:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed9af42a-b28b-4ba1-8136-fd2d43959ab1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:84855cf797] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ad5b78 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
I0523 04:09:34.089] May 23 03:51:06.348: INFO: Pod "webserver-deployment-6676bcd6d4-br677" is not available:
I0523 04:09:34.094] &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-br677 webserver-deployment-6676bcd6d4- deployment-215 /api/v1/namespaces/deployment-215/pods/webserver-deployment-6676bcd6d4-br677 210c2379-095b-4200-9b48-86bfb2f20dc0 24412 0 2020-05-23 03:51:04 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 9c76be41-3503-4733-add2-b41b54843aca 0xc0027d3a07 0xc0027d3a08}] []  [{kube-controller-manager Update v1 2020-05-23 03:51:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9c76be41-3503-4733-add2-b41b54843aca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-23 03:51:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vnh4l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vnh4l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vnh4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,Supplemen
talGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-23 03:51:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
I0523 04:09:34.095] May 23 03:51:06.349: INFO: Pod "webserver-deployment-6676bcd6d4-c7mhr" is not available:
I0523 04:09:34.098] &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-c7mhr webserver-deployment-6676bcd6d4- deployment-215 /api/v1/namespaces/deployment-215/pods/webserver-deployment-6676bcd6d4-c7mhr 830e24ee-6f60-4676-9e06-2da4de103546 24465 0 2020-05-23 03:51:06 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 9c76be41-3503-4733-add2-b41b54843aca 0xc0027d3d60 0xc0027d3d61}] []  [{kube-controller-manager Update v1 2020-05-23 03:51:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9c76be41-3503-4733-add2-b41b54843aca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vnh4l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vnh4l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vnh4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAl
ias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
I0523 04:09:34.098] May 23 03:51:06.349: INFO: Pod "webserver-deployment-6676bcd6d4-m2csb" is not available:
I0523 04:09:34.103] &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-m2csb webserver-deployment-6676bcd6d4- deployment-215 /api/v1/namespaces/deployment-215/pods/webserver-deployment-6676bcd6d4-m2csb 8732dae8-0cc2-44da-a874-5fb8b450e418 24448 0 2020-05-23 03:51:04 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 9c76be41-3503-4733-add2-b41b54843aca 0xc0027d3f50 0xc0027d3f51}] []  [{kube-controller-manager Update v1 2020-05-23 03:51:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9c76be41-3503-4733-add2-b41b54843aca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-23 03:51:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.248\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vnh4l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vnh4l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vnh4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityConte
xt:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.1.248,StartTime:2020-05-23 03:51:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.248,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
I0523 04:09:34.104] May 23 03:51:06.349: INFO: Pod "webserver-deployment-6676bcd6d4-mr8dn" is not available:
I0523 04:09:34.107] &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-mr8dn webserver-deployment-6676bcd6d4- deployment-215 /api/v1/namespaces/deployment-215/pods/webserver-deployment-6676bcd6d4-mr8dn 784499db-3803-4015-bd74-7e014f56663d 24474 0 2020-05-23 03:51:06 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 9c76be41-3503-4733-add2-b41b54843aca 0xc004234140 0xc004234141}] []  [{kube-controller-manager Update v1 2020-05-23 03:51:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9c76be41-3503-4733-add2-b41b54843aca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vnh4l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vnh4l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vnh4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAl
ias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
I0523 04:09:34.107] May 23 03:51:06.349: INFO: Pod "webserver-deployment-6676bcd6d4-mwhq8" is not available:
I0523 04:09:34.110] &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-mwhq8 webserver-deployment-6676bcd6d4- deployment-215 /api/v1/namespaces/deployment-215/pods/webserver-deployment-6676bcd6d4-mwhq8 cd82b71e-420b-4e23-8ba0-c1986ac100ec 24463 0 2020-05-23 03:51:06 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 9c76be41-3503-4733-add2-b41b54843aca 0xc004234280 0xc004234281}] []  [{kube-controller-manager Update v1 2020-05-23 03:51:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9c76be41-3503-4733-add2-b41b54843aca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vnh4l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vnh4l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vnh4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},Priori
tyClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
I0523 04:09:34.111] May 23 03:51:06.350: INFO: Pod "webserver-deployment-6676bcd6d4-rx45z" is not available:
I0523 04:09:34.115] &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-rx45z webserver-deployment-6676bcd6d4- deployment-215 /api/v1/namespaces/deployment-215/pods/webserver-deployment-6676bcd6d4-rx45z 05aa7c96-585d-43eb-b988-0d66e389a255 24416 0 2020-05-23 03:51:04 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 9c76be41-3503-4733-add2-b41b54843aca 0xc0042343a7 0xc0042343a8}] []  [{kube-controller-manager Update v1 2020-05-23 03:51:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9c76be41-3503-4733-add2-b41b54843aca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-23 03:51:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vnh4l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vnh4l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vnh4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,Supplemen
talGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-23 03:51:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
I0523 04:09:34.115] May 23 03:51:06.350: INFO: Pod "webserver-deployment-6676bcd6d4-vxwkr" is not available:
I0523 04:09:34.121] &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-vxwkr webserver-deployment-6676bcd6d4- deployment-215 /api/v1/namespaces/deployment-215/pods/webserver-deployment-6676bcd6d4-vxwkr e71973ac-77a2-425b-b642-7d9093924f60 24449 0 2020-05-23 03:51:04 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 9c76be41-3503-4733-add2-b41b54843aca 0xc004234550 0xc004234551}] []  [{kube-controller-manager Update v1 2020-05-23 03:51:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9c76be41-3503-4733-add2-b41b54843aca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-23 03:51:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.247\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vnh4l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vnh4l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vnh4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityConte
xt:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.1.247,StartTime:2020-05-23 03:51:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.247,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
I0523 04:09:34.121] May 23 03:51:06.350: INFO: Pod "webserver-deployment-6676bcd6d4-zjngc" is not available:
I0523 04:09:34.125] &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-zjngc webserver-deployment-6676bcd6d4- deployment-215 /api/v1/namespaces/deployment-215/pods/webserver-deployment-6676bcd6d4-zjngc 1fd179ec-01f4-4eb4-9866-4232a1317bdb 24419 0 2020-05-23 03:51:04 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 9c76be41-3503-4733-add2-b41b54843aca 0xc004234720 0xc004234721}] []  [{kube-controller-manager Update v1 2020-05-23 03:51:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9c76be41-3503-4733-add2-b41b54843aca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-23 03:51:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vnh4l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vnh4l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vnh4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,Supplemen
talGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-23 03:51:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
I0523 04:09:34.125] May 23 03:51:06.351: INFO: Pod "webserver-deployment-84855cf797-4fl64" is available:
I0523 04:09:34.130] &Pod{ObjectMeta:{webserver-deployment-84855cf797-4fl64 webserver-deployment-84855cf797- deployment-215 /api/v1/namespaces/deployment-215/pods/webserver-deployment-84855cf797-4fl64 4e54e573-4190-4437-aea5-ebf976317c89 24352 0 2020-05-23 03:51:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 164f5f43-649b-405c-94cc-ed0004b39d1d 0xc0042348c0 0xc0042348c1}] []  [{kube-controller-manager Update v1 2020-05-23 03:51:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"164f5f43-649b-405c-94cc-ed0004b39d1d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-23 03:51:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.91\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vnh4l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vnh4l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vnh4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContex
t{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.2,PodIP:10.244.2.91,StartTime:2020-05-23 03:51:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-23 03:51:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6ff38b6cc4b934c195cfd166a3d9979e2d35b636c5e616bea7f34473996bd969,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.91,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
I0523 04:09:34.130] May 23 03:51:06.351: INFO: Pod "webserver-deployment-84855cf797-5mc62" is not available:
I0523 04:09:34.134] &Pod{ObjectMeta:{webserver-deployment-84855cf797-5mc62 webserver-deployment-84855cf797- deployment-215 /api/v1/namespaces/deployment-215/pods/webserver-deployment-84855cf797-5mc62 21873f91-bb6e-4dc0-8367-7013eb828e3b 24472 0 2020-05-23 03:51:06 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 164f5f43-649b-405c-94cc-ed0004b39d1d 0xc004234a80 0xc004234a81}] []  [{kube-controller-manager Update v1 2020-05-23 03:51:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"164f5f43-649b-405c-94cc-ed0004b39d1d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vnh4l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vnh4l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vnh4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,}
,},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:51:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 31 lines ...
I0523 04:09:34.188] • [SLOW TEST:6.273 seconds]
I0523 04:09:34.188] [sig-apps] Deployment
I0523 04:09:34.188] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0523 04:09:34.188]   deployment should support proportional scaling [Conformance]
I0523 04:09:34.188]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.188] ------------------------------
I0523 04:09:34.188] {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":292,"completed":227,"skipped":3557,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.189] SSSSSSSSSSSSSSSSSS
I0523 04:09:34.189] ------------------------------
I0523 04:09:34.189] [k8s.io] Container Runtime blackbox test on terminated container 
I0523 04:09:34.189]   should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
I0523 04:09:34.189]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.189] [BeforeEach] [k8s.io] Container Runtime
... skipping 9 lines ...
I0523 04:09:34.191] I0523 03:51:06.542824      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.191] I0523 03:51:06.542855      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.192] [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
I0523 04:09:34.192]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.192] STEP: create the container
I0523 04:09:34.192] I0523 03:51:06.548260      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.192] STEP: wait for the container to reach Failed
I0523 04:09:34.192] STEP: get the container status
I0523 04:09:34.193] STEP: the container should be terminated
I0523 04:09:34.193] STEP: the termination message should be set
I0523 04:09:34.193] May 23 03:51:13.611: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
I0523 04:09:34.193] STEP: delete the container
I0523 04:09:34.193] [AfterEach] [k8s.io] Container Runtime
... skipping 8 lines ...
I0523 04:09:34.194]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
I0523 04:09:34.194]     on terminated container
I0523 04:09:34.194]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134
I0523 04:09:34.195]       should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
I0523 04:09:34.195]       /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.195] ------------------------------
I0523 04:09:34.195] {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":292,"completed":228,"skipped":3575,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.195] S
I0523 04:09:34.196] ------------------------------
I0523 04:09:34.196] [sig-storage] Downward API volume 
I0523 04:09:34.196]   should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:34.196]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.196] [BeforeEach] [sig-storage] Downward API volume
... skipping 11 lines ...
I0523 04:09:34.198] I0523 03:51:13.781911      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.198] [BeforeEach] [sig-storage] Downward API volume
I0523 04:09:34.199]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
I0523 04:09:34.199] [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:34.199]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.199] STEP: Creating a pod to test downward API volume plugin
I0523 04:09:34.199] May 23 03:51:13.791: INFO: Waiting up to 5m0s for pod "downwardapi-volume-72603b75-4ee6-4c4d-bbb2-7ed0a5629d7d" in namespace "downward-api-7115" to be "Succeeded or Failed"
I0523 04:09:34.200] May 23 03:51:13.798: INFO: Pod "downwardapi-volume-72603b75-4ee6-4c4d-bbb2-7ed0a5629d7d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.10529ms
I0523 04:09:34.200] May 23 03:51:15.801: INFO: Pod "downwardapi-volume-72603b75-4ee6-4c4d-bbb2-7ed0a5629d7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010286284s
I0523 04:09:34.200] May 23 03:51:17.804: INFO: Pod "downwardapi-volume-72603b75-4ee6-4c4d-bbb2-7ed0a5629d7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01347816s
I0523 04:09:34.200] STEP: Saw pod success
I0523 04:09:34.200] May 23 03:51:17.804: INFO: Pod "downwardapi-volume-72603b75-4ee6-4c4d-bbb2-7ed0a5629d7d" satisfied condition "Succeeded or Failed"
I0523 04:09:34.200] May 23 03:51:17.806: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-72603b75-4ee6-4c4d-bbb2-7ed0a5629d7d container client-container: <nil>
I0523 04:09:34.200] STEP: delete the pod
I0523 04:09:34.201] May 23 03:51:17.818: INFO: Waiting for pod downwardapi-volume-72603b75-4ee6-4c4d-bbb2-7ed0a5629d7d to disappear
I0523 04:09:34.201] May 23 03:51:17.819: INFO: Pod downwardapi-volume-72603b75-4ee6-4c4d-bbb2-7ed0a5629d7d no longer exists
I0523 04:09:34.201] [AfterEach] [sig-storage] Downward API volume
I0523 04:09:34.201]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.201] May 23 03:51:17.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.201] STEP: Destroying namespace "downward-api-7115" for this suite.
I0523 04:09:34.202] •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":229,"skipped":3576,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.202] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:34.202] ------------------------------
I0523 04:09:34.202] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0523 04:09:34.202]   should unconditionally reject operations on fail closed webhook [Conformance]
I0523 04:09:34.202]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.202] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0523 04:09:34.202]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0523 04:09:34.203] STEP: Creating a kubernetes client
I0523 04:09:34.203] May 23 03:51:17.825: INFO: >>> kubeConfig: /tmp/kubeconfig-142995523
I0523 04:09:34.203] STEP: Building a namespace api object, basename webhook
... skipping 13 lines ...
I0523 04:09:34.205] STEP: Wait for the deployment to be ready
I0523 04:09:34.205] May 23 03:51:18.317: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
I0523 04:09:34.206] May 23 03:51:20.323: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725802678, loc:(*time.Location)(0x8046260)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725802678, loc:(*time.Location)(0x8046260)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725802678, loc:(*time.Location)(0x8046260)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725802678, loc:(*time.Location)(0x8046260)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
I0523 04:09:34.206] STEP: Deploying the webhook service
I0523 04:09:34.206] STEP: Verifying the service has paired with the endpoint
I0523 04:09:34.206] May 23 03:51:23.333: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
I0523 04:09:34.206] [It] should unconditionally reject operations on fail closed webhook [Conformance]
I0523 04:09:34.206]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.206] STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
I0523 04:09:34.206] STEP: create a namespace for the webhook
I0523 04:09:34.207] STEP: create a configmap should be unconditionally rejected by the webhook
I0523 04:09:34.207] [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0523 04:09:34.207]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.207] May 23 03:51:23.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.207] STEP: Destroying namespace "webhook-1268" for this suite.
I0523 04:09:34.207] STEP: Destroying namespace "webhook-1268-markers" for this suite.
I0523 04:09:34.207] [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0523 04:09:34.208]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
I0523 04:09:34.208] 
I0523 04:09:34.208] • [SLOW TEST:5.577 seconds]
I0523 04:09:34.208] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0523 04:09:34.208] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:34.208]   should unconditionally reject operations on fail closed webhook [Conformance]
I0523 04:09:34.209]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.209] ------------------------------
I0523 04:09:34.209] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":292,"completed":230,"skipped":3607,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.209] SSSSSSSSSSSSSSSSSSS
I0523 04:09:34.209] ------------------------------
I0523 04:09:34.209] [sig-storage] Projected secret 
I0523 04:09:34.210]   should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:34.210]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.210] [BeforeEach] [sig-storage] Projected secret
... skipping 10 lines ...
I0523 04:09:34.212] I0523 03:51:23.535740      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.212] [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:34.212]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.213] I0523 03:51:23.538257      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.213] STEP: Creating projection with secret that has name projected-secret-test-map-7d391a21-056d-4783-b8cd-d9a155870776
I0523 04:09:34.213] STEP: Creating a pod to test consume secrets
I0523 04:09:34.213] May 23 03:51:23.547: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-36d5c1f4-a350-47fd-ad94-979f37ff556b" in namespace "projected-8913" to be "Succeeded or Failed"
I0523 04:09:34.214] May 23 03:51:23.550: INFO: Pod "pod-projected-secrets-36d5c1f4-a350-47fd-ad94-979f37ff556b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.488564ms
I0523 04:09:34.214] May 23 03:51:25.554: INFO: Pod "pod-projected-secrets-36d5c1f4-a350-47fd-ad94-979f37ff556b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006817689s
I0523 04:09:34.214] STEP: Saw pod success
I0523 04:09:34.214] May 23 03:51:25.554: INFO: Pod "pod-projected-secrets-36d5c1f4-a350-47fd-ad94-979f37ff556b" satisfied condition "Succeeded or Failed"
I0523 04:09:34.214] May 23 03:51:25.559: INFO: Trying to get logs from node kind-worker pod pod-projected-secrets-36d5c1f4-a350-47fd-ad94-979f37ff556b container projected-secret-volume-test: <nil>
I0523 04:09:34.214] STEP: delete the pod
I0523 04:09:34.215] May 23 03:51:25.580: INFO: Waiting for pod pod-projected-secrets-36d5c1f4-a350-47fd-ad94-979f37ff556b to disappear
I0523 04:09:34.215] May 23 03:51:25.582: INFO: Pod pod-projected-secrets-36d5c1f4-a350-47fd-ad94-979f37ff556b no longer exists
I0523 04:09:34.215] [AfterEach] [sig-storage] Projected secret
I0523 04:09:34.215]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.215] May 23 03:51:25.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.215] STEP: Destroying namespace "projected-8913" for this suite.
I0523 04:09:34.216] •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":231,"skipped":3626,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.216] 
I0523 04:09:34.216] ------------------------------
I0523 04:09:34.216] [sig-api-machinery] Secrets 
I0523 04:09:34.216]   should be consumable from pods in env vars [NodeConformance] [Conformance]
I0523 04:09:34.217]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.217] [BeforeEach] [sig-api-machinery] Secrets
... skipping 10 lines ...
I0523 04:09:34.219] I0523 03:51:25.715885      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.219] [It] should be consumable from pods in env vars [NodeConformance] [Conformance]
I0523 04:09:34.219]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.219] I0523 03:51:25.718234      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.220] STEP: Creating secret with name secret-test-281222e4-5f01-4ee7-8306-9473b5139f31
I0523 04:09:34.220] STEP: Creating a pod to test consume secrets
I0523 04:09:34.220] May 23 03:51:25.725: INFO: Waiting up to 5m0s for pod "pod-secrets-4bb67102-ae6e-46ef-9dc7-63f87f72c7bd" in namespace "secrets-5567" to be "Succeeded or Failed"
I0523 04:09:34.220] May 23 03:51:25.727: INFO: Pod "pod-secrets-4bb67102-ae6e-46ef-9dc7-63f87f72c7bd": Phase="Pending", Reason="", readiness=false. Elapsed: 1.642799ms
I0523 04:09:34.220] May 23 03:51:27.730: INFO: Pod "pod-secrets-4bb67102-ae6e-46ef-9dc7-63f87f72c7bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005135796s
I0523 04:09:34.221] STEP: Saw pod success
I0523 04:09:34.221] May 23 03:51:27.730: INFO: Pod "pod-secrets-4bb67102-ae6e-46ef-9dc7-63f87f72c7bd" satisfied condition "Succeeded or Failed"
I0523 04:09:34.221] May 23 03:51:27.733: INFO: Trying to get logs from node kind-worker pod pod-secrets-4bb67102-ae6e-46ef-9dc7-63f87f72c7bd container secret-env-test: <nil>
I0523 04:09:34.221] STEP: delete the pod
I0523 04:09:34.221] May 23 03:51:27.743: INFO: Waiting for pod pod-secrets-4bb67102-ae6e-46ef-9dc7-63f87f72c7bd to disappear
I0523 04:09:34.221] May 23 03:51:27.746: INFO: Pod pod-secrets-4bb67102-ae6e-46ef-9dc7-63f87f72c7bd no longer exists
I0523 04:09:34.222] [AfterEach] [sig-api-machinery] Secrets
I0523 04:09:34.222]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.222] May 23 03:51:27.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.222] STEP: Destroying namespace "secrets-5567" for this suite.
I0523 04:09:34.222] •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":292,"completed":232,"skipped":3626,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.223] SSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:34.223] ------------------------------
I0523 04:09:34.223] [k8s.io] InitContainer [NodeConformance] 
I0523 04:09:34.223]   should invoke init containers on a RestartAlways pod [Conformance]
I0523 04:09:34.223]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.223] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 19 lines ...
I0523 04:09:34.227] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0523 04:09:34.227]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.227] May 23 03:51:32.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.227] I0523 03:51:32.424853      17 retrywatcher.go:147] Stopping RetryWatcher.
I0523 04:09:34.227] I0523 03:51:32.425142      17 retrywatcher.go:275] Stopping RetryWatcher.
I0523 04:09:34.227] STEP: Destroying namespace "init-container-1548" for this suite.
I0523 04:09:34.228] •{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":292,"completed":233,"skipped":3648,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.228] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:34.228] ------------------------------
I0523 04:09:34.228] [k8s.io] Probing container 
I0523 04:09:34.228]   should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
I0523 04:09:34.229]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.229] [BeforeEach] [k8s.io] Probing container
... skipping 27 lines ...
I0523 04:09:34.234] • [SLOW TEST:242.524 seconds]
I0523 04:09:34.234] [k8s.io] Probing container
I0523 04:09:34.234] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:34.234]   should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
I0523 04:09:34.235]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.235] ------------------------------
I0523 04:09:34.235] {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":292,"completed":234,"skipped":3689,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.235] SSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:34.235] ------------------------------
I0523 04:09:34.236] [k8s.io] Probing container 
I0523 04:09:34.236]   with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
I0523 04:09:34.236]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.236] [BeforeEach] [k8s.io] Probing container
... skipping 33 lines ...
I0523 04:09:34.242] • [SLOW TEST:20.145 seconds]
I0523 04:09:34.243] [k8s.io] Probing container
I0523 04:09:34.243] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:34.243]   with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
I0523 04:09:34.243]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.243] ------------------------------
I0523 04:09:34.243] {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":292,"completed":235,"skipped":3716,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.243] SSSSSSS
I0523 04:09:34.244] ------------------------------
I0523 04:09:34.244] [k8s.io] Probing container 
I0523 04:09:34.244]   with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
I0523 04:09:34.244]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.244] [BeforeEach] [k8s.io] Probing container
... skipping 21 lines ...
I0523 04:09:34.248] • [SLOW TEST:60.140 seconds]
I0523 04:09:34.248] [k8s.io] Probing container
I0523 04:09:34.248] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:34.248]   with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
I0523 04:09:34.248]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.249] ------------------------------
I0523 04:09:34.249] {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":292,"completed":236,"skipped":3723,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.249] SSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:34.249] ------------------------------
I0523 04:09:34.249] [sig-apps] Deployment 
I0523 04:09:34.249]   RecreateDeployment should delete old pods and create new ones [Conformance]
I0523 04:09:34.250]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.250] [BeforeEach] [sig-apps] Deployment
... skipping 35 lines ...
I0523 04:09:34.262] May 23 03:56:57.483: INFO: Pod "test-recreate-deployment-d5667d9c7-vvv8j" is not available:
I0523 04:09:34.267] &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-vvv8j test-recreate-deployment-d5667d9c7- deployment-6013 /api/v1/namespaces/deployment-6013/pods/test-recreate-deployment-d5667d9c7-vvv8j 2754d471-c1bb-49cd-a805-2c7e02744f00 25928 0 2020-05-23 03:56:57 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 c9355999-efbe-4d9d-b444-d43d68cd07f5 0xc00254dc30 0xc00254dc31}] []  [{kube-controller-manager Update v1 2020-05-23 03:56:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c9355999-efbe-4d9d-b444-d43d68cd07f5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-23 03:56:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-c9wf4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-c9wf4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-c9wf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:
nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:56:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:56:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:56:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 03:56:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-23 03:56:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
I0523 04:09:34.267] [AfterEach] [sig-apps] Deployment
I0523 04:09:34.267]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.267] May 23 03:56:57.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.267] STEP: Destroying namespace "deployment-6013" for this suite.
I0523 04:09:34.268] •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":292,"completed":237,"skipped":3745,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.268] SSSSSSSSSSSSSSSSSS
I0523 04:09:34.268] ------------------------------
I0523 04:09:34.268] [sig-cli] Kubectl client Kubectl run pod 
I0523 04:09:34.268]   should create a pod from an image when restart is Never  [Conformance]
I0523 04:09:34.269]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.269] [BeforeEach] [sig-cli] Kubectl client
... skipping 26 lines ...
I0523 04:09:34.273] May 23 03:57:01.139: INFO: stderr: ""
I0523 04:09:34.273] May 23 03:57:01.139: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
I0523 04:09:34.273] [AfterEach] [sig-cli] Kubectl client
I0523 04:09:34.274]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.274] May 23 03:57:01.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.274] STEP: Destroying namespace "kubectl-1254" for this suite.
I0523 04:09:34.274] •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":292,"completed":238,"skipped":3763,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.274] SSS
I0523 04:09:34.274] ------------------------------
I0523 04:09:34.275] [sig-scheduling] LimitRange 
I0523 04:09:34.275]   should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
I0523 04:09:34.275]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.275] [BeforeEach] [sig-scheduling] LimitRange
... skipping 48 lines ...
I0523 04:09:34.283] • [SLOW TEST:7.187 seconds]
I0523 04:09:34.283] [sig-scheduling] LimitRange
I0523 04:09:34.284] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
I0523 04:09:34.284]   should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
I0523 04:09:34.284]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.284] ------------------------------
I0523 04:09:34.284] {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":292,"completed":239,"skipped":3766,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.284] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:34.285] ------------------------------
I0523 04:09:34.285] [sig-storage] ConfigMap 
I0523 04:09:34.285]   binary data should be reflected in volume [NodeConformance] [Conformance]
I0523 04:09:34.285]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.285] [BeforeEach] [sig-storage] ConfigMap
... skipping 16 lines ...
I0523 04:09:34.288] STEP: Waiting for pod with text data
I0523 04:09:34.288] STEP: Waiting for pod with binary data
I0523 04:09:34.288] [AfterEach] [sig-storage] ConfigMap
I0523 04:09:34.288]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.289] May 23 03:57:10.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.289] STEP: Destroying namespace "configmap-8191" for this suite.
I0523 04:09:34.289] •{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":240,"skipped":3841,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.289] SSS
I0523 04:09:34.289] ------------------------------
I0523 04:09:34.289] [sig-storage] ConfigMap 
I0523 04:09:34.290]   should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
I0523 04:09:34.290]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.290] [BeforeEach] [sig-storage] ConfigMap
... skipping 10 lines ...
I0523 04:09:34.292] I0523 03:57:10.623445      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.292] [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
I0523 04:09:34.292]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.292] STEP: Creating configMap with name configmap-test-volume-b12beb63-5e17-4920-bf11-254bbc0da79c
I0523 04:09:34.292] I0523 03:57:10.625848      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.293] STEP: Creating a pod to test consume configMaps
I0523 04:09:34.293] May 23 03:57:10.633: INFO: Waiting up to 5m0s for pod "pod-configmaps-6d20501d-3b6a-48f0-86e6-a57e5e53ecac" in namespace "configmap-1877" to be "Succeeded or Failed"
I0523 04:09:34.293] May 23 03:57:10.635: INFO: Pod "pod-configmaps-6d20501d-3b6a-48f0-86e6-a57e5e53ecac": Phase="Pending", Reason="", readiness=false. Elapsed: 1.702533ms
I0523 04:09:34.293] May 23 03:57:12.638: INFO: Pod "pod-configmaps-6d20501d-3b6a-48f0-86e6-a57e5e53ecac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004825265s
I0523 04:09:34.293] STEP: Saw pod success
I0523 04:09:34.294] May 23 03:57:12.638: INFO: Pod "pod-configmaps-6d20501d-3b6a-48f0-86e6-a57e5e53ecac" satisfied condition "Succeeded or Failed"
I0523 04:09:34.294] May 23 03:57:12.640: INFO: Trying to get logs from node kind-worker pod pod-configmaps-6d20501d-3b6a-48f0-86e6-a57e5e53ecac container configmap-volume-test: <nil>
I0523 04:09:34.294] STEP: delete the pod
I0523 04:09:34.294] May 23 03:57:12.651: INFO: Waiting for pod pod-configmaps-6d20501d-3b6a-48f0-86e6-a57e5e53ecac to disappear
I0523 04:09:34.294] May 23 03:57:12.652: INFO: Pod pod-configmaps-6d20501d-3b6a-48f0-86e6-a57e5e53ecac no longer exists
I0523 04:09:34.294] [AfterEach] [sig-storage] ConfigMap
I0523 04:09:34.295]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.295] May 23 03:57:12.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.295] STEP: Destroying namespace "configmap-1877" for this suite.
I0523 04:09:34.295] •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":292,"completed":241,"skipped":3844,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.295] SSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:34.295] ------------------------------
I0523 04:09:34.296] [sig-network] DNS 
I0523 04:09:34.296]   should provide DNS for pods for Subdomain [Conformance]
I0523 04:09:34.296]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.296] [BeforeEach] [sig-network] DNS
... skipping 25 lines ...
I0523 04:09:34.303] May 23 03:57:16.810: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.303] May 23 03:57:16.812: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.303] May 23 03:57:16.819: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.304] May 23 03:57:16.822: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.304] May 23 03:57:16.824: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.304] May 23 03:57:16.826: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.305] May 23 03:57:16.831: INFO: Lookups using dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4751.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4751.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local jessie_udp@dns-test-service-2.dns-4751.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4751.svc.cluster.local]
I0523 04:09:34.305] 
I0523 04:09:34.305] May 23 03:57:21.834: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.306] May 23 03:57:21.837: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.306] May 23 03:57:21.839: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.306] May 23 03:57:21.842: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.306] May 23 03:57:21.847: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.307] May 23 03:57:21.849: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.307] May 23 03:57:21.851: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.307] May 23 03:57:21.853: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.308] May 23 03:57:21.857: INFO: Lookups using dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4751.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4751.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local jessie_udp@dns-test-service-2.dns-4751.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4751.svc.cluster.local]
I0523 04:09:34.308] 
I0523 04:09:34.308] May 23 03:57:26.835: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.309] May 23 03:57:26.838: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.309] May 23 03:57:26.840: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.309] May 23 03:57:26.842: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.310] May 23 03:57:26.849: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.310] May 23 03:57:26.851: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.310] May 23 03:57:26.853: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.310] May 23 03:57:26.855: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.311] May 23 03:57:26.859: INFO: Lookups using dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4751.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4751.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local jessie_udp@dns-test-service-2.dns-4751.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4751.svc.cluster.local]
I0523 04:09:34.311] 
I0523 04:09:34.311] May 23 03:57:31.835: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.312] May 23 03:57:31.837: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.312] May 23 03:57:31.840: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.312] May 23 03:57:31.842: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.313] May 23 03:57:31.849: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.313] May 23 03:57:31.851: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.313] May 23 03:57:31.853: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.313] May 23 03:57:31.855: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.314] May 23 03:57:31.859: INFO: Lookups using dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4751.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4751.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local jessie_udp@dns-test-service-2.dns-4751.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4751.svc.cluster.local]
I0523 04:09:34.314] 
I0523 04:09:34.314] May 23 03:57:36.834: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.315] May 23 03:57:36.837: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.315] May 23 03:57:36.839: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.315] May 23 03:57:36.842: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.316] May 23 03:57:36.849: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.316] May 23 03:57:36.851: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.316] May 23 03:57:36.853: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.317] May 23 03:57:36.856: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.317] May 23 03:57:36.861: INFO: Lookups using dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4751.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4751.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local jessie_udp@dns-test-service-2.dns-4751.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4751.svc.cluster.local]
I0523 04:09:34.317] 
I0523 04:09:34.318] May 23 03:57:41.835: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.318] May 23 03:57:41.837: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.318] May 23 03:57:41.839: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.319] May 23 03:57:41.842: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.319] May 23 03:57:41.848: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.319] May 23 03:57:41.850: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.319] May 23 03:57:41.852: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.320] May 23 03:57:41.854: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4751.svc.cluster.local from pod dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2: the server could not find the requested resource (get pods dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2)
I0523 04:09:34.320] May 23 03:57:41.858: INFO: Lookups using dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4751.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4751.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4751.svc.cluster.local jessie_udp@dns-test-service-2.dns-4751.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4751.svc.cluster.local]
I0523 04:09:34.320] 
I0523 04:09:34.320] May 23 03:57:46.858: INFO: DNS probes using dns-4751/dns-test-0dad4672-7644-4cf8-8522-86be204cc0c2 succeeded
I0523 04:09:34.320] 
I0523 04:09:34.320] STEP: deleting the pod
I0523 04:09:34.321] STEP: deleting the test headless service
I0523 04:09:34.321] [AfterEach] [sig-network] DNS
... skipping 4 lines ...
I0523 04:09:34.321] • [SLOW TEST:34.250 seconds]
I0523 04:09:34.321] [sig-network] DNS
I0523 04:09:34.321] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
I0523 04:09:34.322]   should provide DNS for pods for Subdomain [Conformance]
I0523 04:09:34.322]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.322] ------------------------------
I0523 04:09:34.322] {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":292,"completed":242,"skipped":3871,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.322] SSSSSS
I0523 04:09:34.322] ------------------------------
I0523 04:09:34.322] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
I0523 04:09:34.323]   should perform canary updates and phased rolling updates of template modifications [Conformance]
I0523 04:09:34.323]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.323] [BeforeEach] [sig-apps] StatefulSet
... skipping 56 lines ...
I0523 04:09:34.332] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0523 04:09:34.332]   [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
I0523 04:09:34.332]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:34.332]     should perform canary updates and phased rolling updates of template modifications [Conformance]
I0523 04:09:34.332]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.333] ------------------------------
I0523 04:09:34.333] {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":292,"completed":243,"skipped":3877,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.333] SSSSSSSSS
I0523 04:09:34.333] ------------------------------
I0523 04:09:34.333] [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
I0523 04:09:34.333]   should have a terminated reason [NodeConformance] [Conformance]
I0523 04:09:34.333]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.334] [BeforeEach] [k8s.io] Kubelet
... skipping 16 lines ...
I0523 04:09:34.336] [It] should have a terminated reason [NodeConformance] [Conformance]
I0523 04:09:34.336]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.336] [AfterEach] [k8s.io] Kubelet
I0523 04:09:34.336]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.336] May 23 03:59:21.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.337] STEP: Destroying namespace "kubelet-test-4170" for this suite.
I0523 04:09:34.337] •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":292,"completed":244,"skipped":3886,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.337] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:34.337] ------------------------------
I0523 04:09:34.337] [sig-cli] Kubectl client Kubectl label 
I0523 04:09:34.337]   should update the label on a resource  [Conformance]
I0523 04:09:34.337]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.337] [BeforeEach] [sig-cli] Kubectl client
... skipping 54 lines ...
I0523 04:09:34.346] May 23 03:59:24.458: INFO: stderr: ""
I0523 04:09:34.346] May 23 03:59:24.458: INFO: stdout: ""
I0523 04:09:34.346] [AfterEach] [sig-cli] Kubectl client
I0523 04:09:34.346]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.346] May 23 03:59:24.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.346] STEP: Destroying namespace "kubectl-9762" for this suite.
I0523 04:09:34.347] •{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":292,"completed":245,"skipped":3925,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.347] S
I0523 04:09:34.347] ------------------------------
I0523 04:09:34.347] [sig-storage] EmptyDir volumes 
I0523 04:09:34.347]   should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:34.347]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.347] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 9 lines ...
I0523 04:09:34.349] I0523 03:59:24.590109      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.349] I0523 03:59:24.590143      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.350] [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:34.350]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.350] I0523 03:59:24.592350      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.350] STEP: Creating a pod to test emptydir 0644 on tmpfs
I0523 04:09:34.350] May 23 03:59:24.597: INFO: Waiting up to 5m0s for pod "pod-cc0c256d-af33-46c4-9b2e-982f4d9d01db" in namespace "emptydir-6527" to be "Succeeded or Failed"
I0523 04:09:34.351] May 23 03:59:24.599: INFO: Pod "pod-cc0c256d-af33-46c4-9b2e-982f4d9d01db": Phase="Pending", Reason="", readiness=false. Elapsed: 1.923ms
I0523 04:09:34.351] May 23 03:59:26.602: INFO: Pod "pod-cc0c256d-af33-46c4-9b2e-982f4d9d01db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005117927s
I0523 04:09:34.351] STEP: Saw pod success
I0523 04:09:34.351] May 23 03:59:26.602: INFO: Pod "pod-cc0c256d-af33-46c4-9b2e-982f4d9d01db" satisfied condition "Succeeded or Failed"
I0523 04:09:34.351] May 23 03:59:26.604: INFO: Trying to get logs from node kind-worker pod pod-cc0c256d-af33-46c4-9b2e-982f4d9d01db container test-container: <nil>
I0523 04:09:34.351] STEP: delete the pod
I0523 04:09:34.352] May 23 03:59:26.623: INFO: Waiting for pod pod-cc0c256d-af33-46c4-9b2e-982f4d9d01db to disappear
I0523 04:09:34.352] May 23 03:59:26.625: INFO: Pod pod-cc0c256d-af33-46c4-9b2e-982f4d9d01db no longer exists
I0523 04:09:34.352] [AfterEach] [sig-storage] EmptyDir volumes
I0523 04:09:34.352]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.352] May 23 03:59:26.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.352] STEP: Destroying namespace "emptydir-6527" for this suite.
I0523 04:09:34.353] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":246,"skipped":3926,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.353] SSS
I0523 04:09:34.353] ------------------------------
I0523 04:09:34.353] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
I0523 04:09:34.353]   works for CRD with validation schema [Conformance]
I0523 04:09:34.353]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.354] [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 46 lines ...
I0523 04:09:34.370] May 23 03:59:31.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-142995523 explain e2e-test-crd-publish-openapi-11-crds.spec'
I0523 04:09:34.370] May 23 03:59:32.076: INFO: stderr: ""
I0523 04:09:34.370] May 23 03:59:32.076: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-11-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
I0523 04:09:34.370] May 23 03:59:32.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-142995523 explain e2e-test-crd-publish-openapi-11-crds.spec.bars'
I0523 04:09:34.370] May 23 03:59:32.297: INFO: stderr: ""
I0523 04:09:34.371] May 23 03:59:32.297: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-11-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
I0523 04:09:34.371] STEP: kubectl explain works to return error when explain is called on property that doesn't exist
I0523 04:09:34.371] May 23 03:59:32.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-142995523 explain e2e-test-crd-publish-openapi-11-crds.spec.bars2'
I0523 04:09:34.372] May 23 03:59:32.517: INFO: rc: 1
I0523 04:09:34.372] [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
I0523 04:09:34.372]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.372] May 23 03:59:35.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.372] STEP: Destroying namespace "crd-publish-openapi-9629" for this suite.
I0523 04:09:34.372] 
I0523 04:09:34.373] • [SLOW TEST:8.702 seconds]
I0523 04:09:34.373] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
I0523 04:09:34.373] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:34.373]   works for CRD with validation schema [Conformance]
I0523 04:09:34.373]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.373] ------------------------------
I0523 04:09:34.374] {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":292,"completed":247,"skipped":3929,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.374] SSSSS
I0523 04:09:34.374] ------------------------------
I0523 04:09:34.374] [sig-network] Services 
I0523 04:09:34.374]   should be able to change the type from ExternalName to NodePort [Conformance]
I0523 04:09:34.374]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.374] [BeforeEach] [sig-network] Services
... skipping 48 lines ...
I0523 04:09:34.382] • [SLOW TEST:7.018 seconds]
I0523 04:09:34.382] [sig-network] Services
I0523 04:09:34.383] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
I0523 04:09:34.383]   should be able to change the type from ExternalName to NodePort [Conformance]
I0523 04:09:34.383]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.383] ------------------------------
I0523 04:09:34.383] {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":292,"completed":248,"skipped":3934,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.383] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:34.383] ------------------------------
I0523 04:09:34.384] [sig-storage] Projected downwardAPI 
I0523 04:09:34.384]   should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:34.384]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.384] [BeforeEach] [sig-storage] Projected downwardAPI
... skipping 11 lines ...
I0523 04:09:34.385] [BeforeEach] [sig-storage] Projected downwardAPI
I0523 04:09:34.385]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
I0523 04:09:34.386] [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:34.386]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.386] I0523 03:59:42.484135      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.386] STEP: Creating a pod to test downward API volume plugin
I0523 04:09:34.386] May 23 03:59:42.489: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cfa0496f-f7f2-401e-a416-1ed07223764d" in namespace "projected-9890" to be "Succeeded or Failed"
I0523 04:09:34.386] May 23 03:59:42.491: INFO: Pod "downwardapi-volume-cfa0496f-f7f2-401e-a416-1ed07223764d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054933ms
I0523 04:09:34.386] May 23 03:59:44.494: INFO: Pod "downwardapi-volume-cfa0496f-f7f2-401e-a416-1ed07223764d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005151029s
I0523 04:09:34.387] STEP: Saw pod success
I0523 04:09:34.387] May 23 03:59:44.494: INFO: Pod "downwardapi-volume-cfa0496f-f7f2-401e-a416-1ed07223764d" satisfied condition "Succeeded or Failed"
I0523 04:09:34.387] May 23 03:59:44.496: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-cfa0496f-f7f2-401e-a416-1ed07223764d container client-container: <nil>
I0523 04:09:34.387] STEP: delete the pod
I0523 04:09:34.387] May 23 03:59:44.506: INFO: Waiting for pod downwardapi-volume-cfa0496f-f7f2-401e-a416-1ed07223764d to disappear
I0523 04:09:34.387] May 23 03:59:44.508: INFO: Pod downwardapi-volume-cfa0496f-f7f2-401e-a416-1ed07223764d no longer exists
I0523 04:09:34.387] [AfterEach] [sig-storage] Projected downwardAPI
I0523 04:09:34.387]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.387] May 23 03:59:44.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.388] STEP: Destroying namespace "projected-9890" for this suite.
I0523 04:09:34.388] •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":249,"skipped":3998,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.388] SSSSSS
I0523 04:09:34.388] ------------------------------
I0523 04:09:34.388] [sig-storage] Projected downwardAPI 
I0523 04:09:34.388]   should provide container's cpu limit [NodeConformance] [Conformance]
I0523 04:09:34.388]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.389] [BeforeEach] [sig-storage] Projected downwardAPI
... skipping 11 lines ...
I0523 04:09:34.391] [BeforeEach] [sig-storage] Projected downwardAPI
I0523 04:09:34.391]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
I0523 04:09:34.391] [It] should provide container's cpu limit [NodeConformance] [Conformance]
I0523 04:09:34.391]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.391] I0523 03:59:44.640550      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.391] STEP: Creating a pod to test downward API volume plugin
I0523 04:09:34.392] May 23 03:59:44.646: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fb285556-b9bf-4013-8755-35f5e492b9c2" in namespace "projected-6596" to be "Succeeded or Failed"
I0523 04:09:34.392] May 23 03:59:44.648: INFO: Pod "downwardapi-volume-fb285556-b9bf-4013-8755-35f5e492b9c2": Phase="Pending", Reason="", readiness=false. Elapsed: 1.706937ms
I0523 04:09:34.392] May 23 03:59:46.651: INFO: Pod "downwardapi-volume-fb285556-b9bf-4013-8755-35f5e492b9c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004930033s
I0523 04:09:34.392] STEP: Saw pod success
I0523 04:09:34.392] May 23 03:59:46.651: INFO: Pod "downwardapi-volume-fb285556-b9bf-4013-8755-35f5e492b9c2" satisfied condition "Succeeded or Failed"
I0523 04:09:34.393] May 23 03:59:46.653: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-fb285556-b9bf-4013-8755-35f5e492b9c2 container client-container: <nil>
I0523 04:09:34.393] STEP: delete the pod
I0523 04:09:34.393] May 23 03:59:46.666: INFO: Waiting for pod downwardapi-volume-fb285556-b9bf-4013-8755-35f5e492b9c2 to disappear
I0523 04:09:34.393] May 23 03:59:46.669: INFO: Pod downwardapi-volume-fb285556-b9bf-4013-8755-35f5e492b9c2 no longer exists
I0523 04:09:34.393] [AfterEach] [sig-storage] Projected downwardAPI
I0523 04:09:34.393]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.393] May 23 03:59:46.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.394] STEP: Destroying namespace "projected-6596" for this suite.
I0523 04:09:34.394] •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":292,"completed":250,"skipped":4004,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.394] SSSSSS
I0523 04:09:34.394] ------------------------------
I0523 04:09:34.394] [k8s.io] Variable Expansion 
I0523 04:09:34.394]   should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
I0523 04:09:34.394]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.395] [BeforeEach] [k8s.io] Variable Expansion
... skipping 8 lines ...
I0523 04:09:34.396] STEP: Waiting for a default service account to be provisioned in namespace
I0523 04:09:34.396] I0523 03:59:46.796674      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.396] I0523 03:59:46.796710      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.396] [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
I0523 04:09:34.396]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.397] I0523 03:59:46.798910      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.397] STEP: creating the pod with failed condition
I0523 04:09:34.397] I0523 03:59:57.640472      17 reflector.go:514] k8s.io/kubernetes/test/e2e/node/taints.go:146: Watch close - *v1.Pod total 11 items received
I0523 04:09:34.397] STEP: updating the pod
I0523 04:09:34.397] May 23 04:01:47.318: INFO: Successfully updated pod "var-expansion-ecdadfad-ef90-48c4-a22b-4e4118e0a45c"
I0523 04:09:34.397] STEP: waiting for pod running
I0523 04:09:34.397] STEP: deleting the pod gracefully
I0523 04:09:34.398] May 23 04:01:49.324: INFO: Deleting pod "var-expansion-ecdadfad-ef90-48c4-a22b-4e4118e0a45c" in namespace "var-expansion-871"
... skipping 6 lines ...
I0523 04:09:34.399] • [SLOW TEST:160.664 seconds]
I0523 04:09:34.399] [k8s.io] Variable Expansion
I0523 04:09:34.399] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:34.399]   should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
I0523 04:09:34.399]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.399] ------------------------------
I0523 04:09:34.400] {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":292,"completed":251,"skipped":4010,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.400] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:34.400] ------------------------------
I0523 04:09:34.400] [sig-storage] Secrets 
I0523 04:09:34.400]   should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
I0523 04:09:34.401]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.401] [BeforeEach] [sig-storage] Secrets
... skipping 10 lines ...
I0523 04:09:34.402] I0523 04:02:27.463024      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.403] [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
I0523 04:09:34.403]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.403] I0523 04:02:27.464975      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.403] STEP: Creating secret with name secret-test-b6cbc1b3-088d-4a71-8f70-6c82703bce93
I0523 04:09:34.403] STEP: Creating a pod to test consume secrets
I0523 04:09:34.403] May 23 04:02:27.472: INFO: Waiting up to 5m0s for pod "pod-secrets-b8dd0f3d-0a37-4a50-b1c2-e1100171d2ab" in namespace "secrets-8016" to be "Succeeded or Failed"
I0523 04:09:34.404] May 23 04:02:27.474: INFO: Pod "pod-secrets-b8dd0f3d-0a37-4a50-b1c2-e1100171d2ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099043ms
I0523 04:09:34.404] May 23 04:02:29.476: INFO: Pod "pod-secrets-b8dd0f3d-0a37-4a50-b1c2-e1100171d2ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004310525s
I0523 04:09:34.404] STEP: Saw pod success
I0523 04:09:34.404] May 23 04:02:29.476: INFO: Pod "pod-secrets-b8dd0f3d-0a37-4a50-b1c2-e1100171d2ab" satisfied condition "Succeeded or Failed"
I0523 04:09:34.404] May 23 04:02:29.478: INFO: Trying to get logs from node kind-worker pod pod-secrets-b8dd0f3d-0a37-4a50-b1c2-e1100171d2ab container secret-volume-test: <nil>
I0523 04:09:34.404] STEP: delete the pod
I0523 04:09:34.405] May 23 04:02:29.498: INFO: Waiting for pod pod-secrets-b8dd0f3d-0a37-4a50-b1c2-e1100171d2ab to disappear
I0523 04:09:34.405] May 23 04:02:29.500: INFO: Pod pod-secrets-b8dd0f3d-0a37-4a50-b1c2-e1100171d2ab no longer exists
I0523 04:09:34.405] [AfterEach] [sig-storage] Secrets
I0523 04:09:34.405]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.405] May 23 04:02:29.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.405] STEP: Destroying namespace "secrets-8016" for this suite.
I0523 04:09:34.405] •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":292,"completed":252,"skipped":4042,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.405] SSSSSSS
I0523 04:09:34.406] ------------------------------
I0523 04:09:34.406] [sig-network] DNS 
I0523 04:09:34.406]   should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
I0523 04:09:34.406]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.406] [BeforeEach] [sig-network] DNS
... skipping 25 lines ...
I0523 04:09:34.411] STEP: deleting the pod
I0523 04:09:34.411] STEP: deleting the test headless service
I0523 04:09:34.411] [AfterEach] [sig-network] DNS
I0523 04:09:34.411]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.411] May 23 04:02:31.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.411] STEP: Destroying namespace "dns-2241" for this suite.
I0523 04:09:34.411] •{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":292,"completed":253,"skipped":4049,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.412] 
I0523 04:09:34.412] ------------------------------
I0523 04:09:34.412] [sig-apps] ReplicaSet 
I0523 04:09:34.412]   should adopt matching pods on creation and release no longer matching pods [Conformance]
I0523 04:09:34.412]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.412] [BeforeEach] [sig-apps] ReplicaSet
... skipping 18 lines ...
I0523 04:09:34.415] May 23 04:02:34.849: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
I0523 04:09:34.415] STEP: Then the pod is released
I0523 04:09:34.415] [AfterEach] [sig-apps] ReplicaSet
I0523 04:09:34.415]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.415] May 23 04:02:35.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.416] STEP: Destroying namespace "replicaset-879" for this suite.
I0523 04:09:34.416] •{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":292,"completed":254,"skipped":4049,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.416] SSSSSSSSSSSSSSS
I0523 04:09:34.416] ------------------------------
I0523 04:09:34.416] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
I0523 04:09:34.416]   listing custom resource definition objects works  [Conformance]
I0523 04:09:34.417]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.417] [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 22 lines ...
I0523 04:09:34.420] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:34.420]   Simple CustomResourceDefinition
I0523 04:09:34.421]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
I0523 04:09:34.421]     listing custom resource definition objects works  [Conformance]
I0523 04:09:34.421]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.421] ------------------------------
I0523 04:09:34.421] {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":292,"completed":255,"skipped":4064,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.422] SSSSSSSSSSSSSSSSS
I0523 04:09:34.422] ------------------------------
I0523 04:09:34.422] [sig-cli] Kubectl client Guestbook application 
I0523 04:09:34.422]   should create and stop a working application  [Conformance]
I0523 04:09:34.422]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.422] [BeforeEach] [sig-cli] Kubectl client
... skipping 209 lines ...
I0523 04:09:34.447] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
I0523 04:09:34.447]   Guestbook application
I0523 04:09:34.447]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342
I0523 04:09:34.447]     should create and stop a working application  [Conformance]
I0523 04:09:34.447]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.447] ------------------------------
I0523 04:09:34.448] {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":292,"completed":256,"skipped":4081,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.448] SSSSSS
I0523 04:09:34.448] ------------------------------
I0523 04:09:34.448] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0523 04:09:34.448]   should mutate configmap [Conformance]
I0523 04:09:34.448]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.449] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 35 lines ...
I0523 04:09:34.454] • [SLOW TEST:5.905 seconds]
I0523 04:09:34.455] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0523 04:09:34.455] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:34.455]   should mutate configmap [Conformance]
I0523 04:09:34.455]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.455] ------------------------------
I0523 04:09:34.455] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":292,"completed":257,"skipped":4087,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.456] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:34.456] ------------------------------
I0523 04:09:34.456] [sig-api-machinery] ResourceQuota 
I0523 04:09:34.456]   should create a ResourceQuota and capture the life of a pod. [Conformance]
I0523 04:09:34.456]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.456] [BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 30 lines ...
I0523 04:09:34.461] • [SLOW TEST:13.183 seconds]
I0523 04:09:34.461] [sig-api-machinery] ResourceQuota
I0523 04:09:34.461] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:34.461]   should create a ResourceQuota and capture the life of a pod. [Conformance]
I0523 04:09:34.462]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.462] ------------------------------
I0523 04:09:34.462] {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":292,"completed":258,"skipped":4121,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.462] [k8s.io] Container Runtime blackbox test on terminated container 
I0523 04:09:34.462]   should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
I0523 04:09:34.463]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.463] [BeforeEach] [k8s.io] Container Runtime
I0523 04:09:34.463]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0523 04:09:34.463] STEP: Creating a kubernetes client
... skipping 17 lines ...
I0523 04:09:34.466] May 23 04:03:11.236: INFO: Expected: &{} to match Container's Termination Message:  --
I0523 04:09:34.466] STEP: delete the container
I0523 04:09:34.466] [AfterEach] [k8s.io] Container Runtime
I0523 04:09:34.466]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.466] May 23 04:03:11.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.466] STEP: Destroying namespace "container-runtime-6498" for this suite.
I0523 04:09:34.467] •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":292,"completed":259,"skipped":4121,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.467] SSSSSSSSSSSSSS
I0523 04:09:34.467] ------------------------------
I0523 04:09:34.467] [sig-api-machinery] ResourceQuota 
I0523 04:09:34.467]   should create a ResourceQuota and capture the life of a secret. [Conformance]
I0523 04:09:34.467]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.468] [BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 27 lines ...
I0523 04:09:34.472] • [SLOW TEST:17.164 seconds]
I0523 04:09:34.472] [sig-api-machinery] ResourceQuota
I0523 04:09:34.472] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:34.472]   should create a ResourceQuota and capture the life of a secret. [Conformance]
I0523 04:09:34.473]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.473] ------------------------------
I0523 04:09:34.473] {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":292,"completed":260,"skipped":4135,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.473] SSSSSSSSSSSSSSSSS
I0523 04:09:34.473] ------------------------------
I0523 04:09:34.473] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
I0523 04:09:34.474]   should perform rolling updates and roll backs of template modifications [Conformance]
I0523 04:09:34.474]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.474] [BeforeEach] [sig-apps] StatefulSet
... skipping 75 lines ...
I0523 04:09:34.486] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0523 04:09:34.486]   [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
I0523 04:09:34.486]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
I0523 04:09:34.486]     should perform rolling updates and roll backs of template modifications [Conformance]
I0523 04:09:34.486]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.486] ------------------------------
I0523 04:09:34.487] {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":292,"completed":261,"skipped":4152,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.487] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:34.487] ------------------------------
I0523 04:09:34.487] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
I0523 04:09:34.487]   works for CRD without validation schema [Conformance]
I0523 04:09:34.487]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.487] [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 30 lines ...
I0523 04:09:34.494] May 23 04:05:32.548: INFO: stderr: ""
I0523 04:09:34.494] May 23 04:05:32.548: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1875-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
I0523 04:09:34.494] [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
I0523 04:09:34.494]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.494] May 23 04:05:34.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.495] STEP: Destroying namespace "crd-publish-openapi-642" for this suite.
I0523 04:09:34.495] •{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":292,"completed":262,"skipped":4276,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.495] SSSSSSSSSSSSSS
I0523 04:09:34.495] ------------------------------
I0523 04:09:34.495] [sig-storage] Projected configMap 
I0523 04:09:34.495]   should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
I0523 04:09:34.496]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.496] [BeforeEach] [sig-storage] Projected configMap
... skipping 10 lines ...
I0523 04:09:34.498] I0523 04:05:34.478598      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.499] [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
I0523 04:09:34.499]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.499] STEP: Creating configMap with name projected-configmap-test-volume-map-fec0fdf0-6b34-4127-a919-96ddd3c82c3d
I0523 04:09:34.499] I0523 04:05:34.480971      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.499] STEP: Creating a pod to test consume configMaps
I0523 04:09:34.500] May 23 04:05:34.489: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d404241e-3ad1-435c-b5a4-16e0813cbb0d" in namespace "projected-1974" to be "Succeeded or Failed"
I0523 04:09:34.500] May 23 04:05:34.491: INFO: Pod "pod-projected-configmaps-d404241e-3ad1-435c-b5a4-16e0813cbb0d": Phase="Pending", Reason="", readiness=false. Elapsed: 1.793855ms
I0523 04:09:34.500] May 23 04:05:36.494: INFO: Pod "pod-projected-configmaps-d404241e-3ad1-435c-b5a4-16e0813cbb0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005044403s
I0523 04:09:34.501] May 23 04:05:38.497: INFO: Pod "pod-projected-configmaps-d404241e-3ad1-435c-b5a4-16e0813cbb0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008095978s
I0523 04:09:34.501] STEP: Saw pod success
I0523 04:09:34.501] May 23 04:05:38.497: INFO: Pod "pod-projected-configmaps-d404241e-3ad1-435c-b5a4-16e0813cbb0d" satisfied condition "Succeeded or Failed"
I0523 04:09:34.501] May 23 04:05:38.499: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-d404241e-3ad1-435c-b5a4-16e0813cbb0d container projected-configmap-volume-test: <nil>
I0523 04:09:34.501] STEP: delete the pod
I0523 04:09:34.502] May 23 04:05:38.517: INFO: Waiting for pod pod-projected-configmaps-d404241e-3ad1-435c-b5a4-16e0813cbb0d to disappear
I0523 04:09:34.502] May 23 04:05:38.518: INFO: Pod pod-projected-configmaps-d404241e-3ad1-435c-b5a4-16e0813cbb0d no longer exists
I0523 04:09:34.502] [AfterEach] [sig-storage] Projected configMap
I0523 04:09:34.502]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.502] May 23 04:05:38.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.502] STEP: Destroying namespace "projected-1974" for this suite.
I0523 04:09:34.503] •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":292,"completed":263,"skipped":4290,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.503] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:34.503] ------------------------------
I0523 04:09:34.503] [sig-storage] Projected downwardAPI 
I0523 04:09:34.503]   should provide podname only [NodeConformance] [Conformance]
I0523 04:09:34.504]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.504] [BeforeEach] [sig-storage] Projected downwardAPI
... skipping 11 lines ...
I0523 04:09:34.505] [BeforeEach] [sig-storage] Projected downwardAPI
I0523 04:09:34.506]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
I0523 04:09:34.506] I0523 04:05:38.648221      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.506] [It] should provide podname only [NodeConformance] [Conformance]
I0523 04:09:34.506]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.506] STEP: Creating a pod to test downward API volume plugin
I0523 04:09:34.506] May 23 04:05:38.653: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e126293a-8d9d-4a7c-81f2-fe7f6d2120ac" in namespace "projected-2929" to be "Succeeded or Failed"
I0523 04:09:34.507] May 23 04:05:38.656: INFO: Pod "downwardapi-volume-e126293a-8d9d-4a7c-81f2-fe7f6d2120ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.88799ms
I0523 04:09:34.507] May 23 04:05:40.659: INFO: Pod "downwardapi-volume-e126293a-8d9d-4a7c-81f2-fe7f6d2120ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006256946s
I0523 04:09:34.507] I0523 04:05:42.184764      17 reflector.go:514] k8s.io/kubernetes/test/e2e/node/taints.go:146: Watch close - *v1.Pod total 7 items received
I0523 04:09:34.507] May 23 04:05:42.663: INFO: Pod "downwardapi-volume-e126293a-8d9d-4a7c-81f2-fe7f6d2120ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009683878s
I0523 04:09:34.507] STEP: Saw pod success
I0523 04:09:34.507] May 23 04:05:42.663: INFO: Pod "downwardapi-volume-e126293a-8d9d-4a7c-81f2-fe7f6d2120ac" satisfied condition "Succeeded or Failed"
I0523 04:09:34.508] May 23 04:05:42.665: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-e126293a-8d9d-4a7c-81f2-fe7f6d2120ac container client-container: <nil>
I0523 04:09:34.508] STEP: delete the pod
I0523 04:09:34.508] May 23 04:05:42.676: INFO: Waiting for pod downwardapi-volume-e126293a-8d9d-4a7c-81f2-fe7f6d2120ac to disappear
I0523 04:09:34.508] May 23 04:05:42.678: INFO: Pod downwardapi-volume-e126293a-8d9d-4a7c-81f2-fe7f6d2120ac no longer exists
I0523 04:09:34.508] [AfterEach] [sig-storage] Projected downwardAPI
I0523 04:09:34.508]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.509] May 23 04:05:42.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.509] STEP: Destroying namespace "projected-2929" for this suite.
I0523 04:09:34.509] •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":292,"completed":264,"skipped":4327,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.509] SSSSSSSSSSSSSS
I0523 04:09:34.509] ------------------------------
I0523 04:09:34.509] [sig-storage] Downward API volume 
I0523 04:09:34.510]   should provide container's memory limit [NodeConformance] [Conformance]
I0523 04:09:34.510]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.510] [BeforeEach] [sig-storage] Downward API volume
... skipping 11 lines ...
I0523 04:09:34.512] [BeforeEach] [sig-storage] Downward API volume
I0523 04:09:34.512]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
I0523 04:09:34.512] I0523 04:05:42.807605      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.513] [It] should provide container's memory limit [NodeConformance] [Conformance]
I0523 04:09:34.513]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.513] STEP: Creating a pod to test downward API volume plugin
I0523 04:09:34.513] May 23 04:05:42.812: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0b505f32-6dc8-427d-99dc-3536481591e9" in namespace "downward-api-4133" to be "Succeeded or Failed"
I0523 04:09:34.513] May 23 04:05:42.814: INFO: Pod "downwardapi-volume-0b505f32-6dc8-427d-99dc-3536481591e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025252ms
I0523 04:09:34.514] May 23 04:05:44.816: INFO: Pod "downwardapi-volume-0b505f32-6dc8-427d-99dc-3536481591e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00447359s
I0523 04:09:34.514] May 23 04:05:46.819: INFO: Pod "downwardapi-volume-0b505f32-6dc8-427d-99dc-3536481591e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007575461s
I0523 04:09:34.514] STEP: Saw pod success
I0523 04:09:34.514] May 23 04:05:46.819: INFO: Pod "downwardapi-volume-0b505f32-6dc8-427d-99dc-3536481591e9" satisfied condition "Succeeded or Failed"
I0523 04:09:34.514] May 23 04:05:46.821: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-0b505f32-6dc8-427d-99dc-3536481591e9 container client-container: <nil>
I0523 04:09:34.514] STEP: delete the pod
I0523 04:09:34.514] May 23 04:05:46.833: INFO: Waiting for pod downwardapi-volume-0b505f32-6dc8-427d-99dc-3536481591e9 to disappear
I0523 04:09:34.515] May 23 04:05:46.835: INFO: Pod downwardapi-volume-0b505f32-6dc8-427d-99dc-3536481591e9 no longer exists
I0523 04:09:34.515] [AfterEach] [sig-storage] Downward API volume
I0523 04:09:34.515]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.515] May 23 04:05:46.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.515] STEP: Destroying namespace "downward-api-4133" for this suite.
I0523 04:09:34.515] •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":292,"completed":265,"skipped":4341,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.515] SSSSSSSSSSS
I0523 04:09:34.516] ------------------------------
I0523 04:09:34.516] [sig-node] ConfigMap 
I0523 04:09:34.516]   should be consumable via the environment [NodeConformance] [Conformance]
I0523 04:09:34.516]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.516] [BeforeEach] [sig-node] ConfigMap
... skipping 10 lines ...
I0523 04:09:34.518] I0523 04:05:46.968197      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.518] [It] should be consumable via the environment [NodeConformance] [Conformance]
I0523 04:09:34.518]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.518] I0523 04:05:46.970378      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.518] STEP: Creating configMap configmap-4961/configmap-test-517c7bfa-3247-4b14-8fa3-66492336f878
I0523 04:09:34.519] STEP: Creating a pod to test consume configMaps
I0523 04:09:34.519] May 23 04:05:46.978: INFO: Waiting up to 5m0s for pod "pod-configmaps-d3e6aeb0-7f7b-4c75-9aeb-ec4e06543a13" in namespace "configmap-4961" to be "Succeeded or Failed"
I0523 04:09:34.519] May 23 04:05:46.979: INFO: Pod "pod-configmaps-d3e6aeb0-7f7b-4c75-9aeb-ec4e06543a13": Phase="Pending", Reason="", readiness=false. Elapsed: 1.924447ms
I0523 04:09:34.519] May 23 04:05:48.983: INFO: Pod "pod-configmaps-d3e6aeb0-7f7b-4c75-9aeb-ec4e06543a13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005328457s
I0523 04:09:34.519] STEP: Saw pod success
I0523 04:09:34.520] May 23 04:05:48.983: INFO: Pod "pod-configmaps-d3e6aeb0-7f7b-4c75-9aeb-ec4e06543a13" satisfied condition "Succeeded or Failed"
I0523 04:09:34.520] May 23 04:05:48.985: INFO: Trying to get logs from node kind-worker pod pod-configmaps-d3e6aeb0-7f7b-4c75-9aeb-ec4e06543a13 container env-test: <nil>
I0523 04:09:34.520] STEP: delete the pod
I0523 04:09:34.520] May 23 04:05:48.996: INFO: Waiting for pod pod-configmaps-d3e6aeb0-7f7b-4c75-9aeb-ec4e06543a13 to disappear
I0523 04:09:34.520] May 23 04:05:48.998: INFO: Pod pod-configmaps-d3e6aeb0-7f7b-4c75-9aeb-ec4e06543a13 no longer exists
I0523 04:09:34.520] [AfterEach] [sig-node] ConfigMap
I0523 04:09:34.521]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.521] May 23 04:05:48.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.521] STEP: Destroying namespace "configmap-4961" for this suite.
I0523 04:09:34.521] •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":292,"completed":266,"skipped":4352,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.521] SSSSSSSSSSSSSSS
I0523 04:09:34.521] ------------------------------
I0523 04:09:34.522] [sig-apps] Daemon set [Serial] 
I0523 04:09:34.522]   should retry creating failed daemon pods [Conformance]
I0523 04:09:34.522]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.522] [BeforeEach] [sig-apps] Daemon set [Serial]
I0523 04:09:34.522]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
I0523 04:09:34.522] STEP: Creating a kubernetes client
I0523 04:09:34.522] May 23 04:05:49.004: INFO: >>> kubeConfig: /tmp/kubeconfig-142995523
I0523 04:09:34.522] STEP: Building a namespace api object, basename daemonsets
... skipping 4 lines ...
I0523 04:09:34.523] STEP: Waiting for a default service account to be provisioned in namespace
I0523 04:09:34.523] I0523 04:05:49.129931      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.523] I0523 04:05:49.129952      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.524] [BeforeEach] [sig-apps] Daemon set [Serial]
I0523 04:09:34.524]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
I0523 04:09:34.524] I0523 04:05:49.132034      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.524] [It] should retry creating failed daemon pods [Conformance]
I0523 04:09:34.524]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.524] STEP: Creating a simple DaemonSet "daemon-set"
I0523 04:09:34.524] STEP: Check that daemon pods launch on every node of the cluster.
I0523 04:09:34.525] May 23 04:05:49.148: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0523 04:09:34.525] May 23 04:05:49.150: INFO: Number of nodes with available pods: 0
I0523 04:09:34.525] May 23 04:05:49.150: INFO: Node kind-worker is running more than one daemon pod
I0523 04:09:34.525] May 23 04:05:50.153: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0523 04:09:34.525] May 23 04:05:50.156: INFO: Number of nodes with available pods: 0
I0523 04:09:34.525] May 23 04:05:50.156: INFO: Node kind-worker is running more than one daemon pod
I0523 04:09:34.526] May 23 04:05:51.154: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0523 04:09:34.526] May 23 04:05:51.157: INFO: Number of nodes with available pods: 2
I0523 04:09:34.526] May 23 04:05:51.157: INFO: Number of running nodes: 2, number of available pods: 2
I0523 04:09:34.526] STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
I0523 04:09:34.526] May 23 04:05:51.167: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0523 04:09:34.526] May 23 04:05:51.172: INFO: Number of nodes with available pods: 1
I0523 04:09:34.527] May 23 04:05:51.172: INFO: Node kind-worker2 is running more than one daemon pod
I0523 04:09:34.527] May 23 04:05:52.175: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0523 04:09:34.527] May 23 04:05:52.178: INFO: Number of nodes with available pods: 1
I0523 04:09:34.527] May 23 04:05:52.178: INFO: Node kind-worker2 is running more than one daemon pod
I0523 04:09:34.527] May 23 04:05:53.175: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0523 04:09:34.527] May 23 04:05:53.178: INFO: Number of nodes with available pods: 2
I0523 04:09:34.527] May 23 04:05:53.178: INFO: Number of running nodes: 2, number of available pods: 2
I0523 04:09:34.527] STEP: Wait for the failed daemon pod to be completely deleted.
I0523 04:09:34.528] [AfterEach] [sig-apps] Daemon set [Serial]
I0523 04:09:34.528]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
I0523 04:09:34.528] STEP: Deleting DaemonSet "daemon-set"
I0523 04:09:34.528] STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1245, will wait for the garbage collector to delete the pods
I0523 04:09:34.528] I0523 04:05:53.183763      17 reflector.go:207] Starting reflector *v1.Pod (0s) from k8s.io/kubernetes/test/utils/pod_store.go:57
I0523 04:09:34.528] I0523 04:05:53.183787      17 reflector.go:243] Listing and watching *v1.Pod from k8s.io/kubernetes/test/utils/pod_store.go:57
... skipping 13 lines ...
I0523 04:09:34.531] May 23 04:06:06.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.531] STEP: Destroying namespace "daemonsets-1245" for this suite.
I0523 04:09:34.531] 
I0523 04:09:34.531] • [SLOW TEST:17.653 seconds]
I0523 04:09:34.531] [sig-apps] Daemon set [Serial]
I0523 04:09:34.531] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0523 04:09:34.531]   should retry creating failed daemon pods [Conformance]
I0523 04:09:34.531]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.532] ------------------------------
I0523 04:09:34.532] {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":292,"completed":267,"skipped":4367,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.532] SSS
I0523 04:09:34.532] ------------------------------
I0523 04:09:34.532] [sig-cli] Kubectl client Kubectl server-side dry-run 
I0523 04:09:34.532]   should check if kubectl can dry-run update Pods [Conformance]
I0523 04:09:34.532]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.533] [BeforeEach] [sig-cli] Kubectl client
... skipping 29 lines ...
I0523 04:09:34.543] May 23 04:06:08.904: INFO: stderr: ""
I0523 04:09:34.543] May 23 04:06:08.904: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
I0523 04:09:34.543] [AfterEach] [sig-cli] Kubectl client
I0523 04:09:34.543]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.543] May 23 04:06:08.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.543] STEP: Destroying namespace "kubectl-9950" for this suite.
I0523 04:09:34.544] •{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":292,"completed":268,"skipped":4370,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.544] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
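[What the dry-run test exercises: with --dry-run=server the API server runs full validation and admission for an update but persists nothing. A minimal sketch — the pod and image names follow the log; the sed edit is illustrative:]

  kubectl run e2e-test-httpd-pod --image=httpd --restart=Never
  kubectl get pod e2e-test-httpd-pod -o yaml \
    | sed 's|image: httpd|image: busybox|' \
    | kubectl replace --dry-run=server -f -
  # The live object is unchanged:
  kubectl get pod e2e-test-httpd-pod -o jsonpath='{.spec.containers[0].image}'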
I0523 04:09:34.544] ------------------------------
I0523 04:09:34.544] [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
I0523 04:09:34.544]   should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:34.544]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.545] [BeforeEach] [k8s.io] Security Context
... skipping 10 lines ...
I0523 04:09:34.546] I0523 04:06:09.038978      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.547] [BeforeEach] [k8s.io] Security Context
I0523 04:09:34.547]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
I0523 04:09:34.547] [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:34.547]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.547] I0523 04:06:09.041367      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.548] May 23 04:06:09.046: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-8d8980c0-0456-47c8-a769-308aa72689f9" in namespace "security-context-test-7964" to be "Succeeded or Failed"
I0523 04:09:34.548] May 23 04:06:09.048: INFO: Pod "alpine-nnp-false-8d8980c0-0456-47c8-a769-308aa72689f9": Phase="Pending", Reason="", readiness=false. Elapsed: 1.677676ms
I0523 04:09:34.548] May 23 04:06:11.052: INFO: Pod "alpine-nnp-false-8d8980c0-0456-47c8-a769-308aa72689f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005810548s
I0523 04:09:34.548] May 23 04:06:11.052: INFO: Pod "alpine-nnp-false-8d8980c0-0456-47c8-a769-308aa72689f9" satisfied condition "Succeeded or Failed"
I0523 04:09:34.548] [AfterEach] [k8s.io] Security Context
I0523 04:09:34.548]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.549] May 23 04:06:11.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.549] STEP: Destroying namespace "security-context-test-7964" for this suite.
I0523 04:09:34.549] •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":269,"skipped":4408,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.549] SSSSSSSSSSSSSSSSSSS
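[The security-context test above hinges on one field: securityContext.allowPrivilegeEscalation: false sets the kernel's no_new_privs bit, so setuid binaries cannot gain privileges. A minimal pod sketch; the name and image are illustrative:]

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: nnp-false-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: alpine
      # "NoNewPrivs: 1" in /proc/self/status confirms the bit is set
      command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
      securityContext:
        allowPrivilegeEscalation: false
  EOF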
I0523 04:09:34.549] ------------------------------
I0523 04:09:34.549] [sig-storage] EmptyDir volumes 
I0523 04:09:34.550]   should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:34.550]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.550] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 9 lines ...
I0523 04:09:34.552] I0523 04:06:11.197554      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.552] I0523 04:06:11.197580      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.552] [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:34.552]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.553] I0523 04:06:11.199866      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.553] STEP: Creating a pod to test emptydir 0644 on node default medium
I0523 04:09:34.553] May 23 04:06:11.206: INFO: Waiting up to 5m0s for pod "pod-976fc7e8-ba8d-477e-aa26-e6742606d20b" in namespace "emptydir-5321" to be "Succeeded or Failed"
I0523 04:09:34.553] May 23 04:06:11.208: INFO: Pod "pod-976fc7e8-ba8d-477e-aa26-e6742606d20b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.949325ms
I0523 04:09:34.553] May 23 04:06:13.211: INFO: Pod "pod-976fc7e8-ba8d-477e-aa26-e6742606d20b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004948647s
I0523 04:09:34.554] STEP: Saw pod success
I0523 04:09:34.554] May 23 04:06:13.211: INFO: Pod "pod-976fc7e8-ba8d-477e-aa26-e6742606d20b" satisfied condition "Succeeded or Failed"
I0523 04:09:34.554] May 23 04:06:13.213: INFO: Trying to get logs from node kind-worker pod pod-976fc7e8-ba8d-477e-aa26-e6742606d20b container test-container: <nil>
I0523 04:09:34.554] STEP: delete the pod
I0523 04:09:34.554] May 23 04:06:13.224: INFO: Waiting for pod pod-976fc7e8-ba8d-477e-aa26-e6742606d20b to disappear
I0523 04:09:34.554] May 23 04:06:13.226: INFO: Pod pod-976fc7e8-ba8d-477e-aa26-e6742606d20b no longer exists
I0523 04:09:34.555] [AfterEach] [sig-storage] EmptyDir volumes
I0523 04:09:34.555]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.555] May 23 04:06:13.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.555] STEP: Destroying namespace "emptydir-5321" for this suite.
I0523 04:09:34.555] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":270,"skipped":4427,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.555] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
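[Decoding "(root,0644,default)": the test writes a file as root with mode 0644 into an emptyDir on the default medium (node disk; the "tmpfs" variants use medium: Memory). A rough equivalent by hand:]

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "touch /cache/f && chmod 0644 /cache/f && ls -ln /cache/f"]
      volumeMounts:
      - name: cache
        mountPath: /cache
    volumes:
    - name: cache
      emptyDir: {}          # medium: Memory would back this with tmpfs instead
  EOF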
I0523 04:09:34.556] ------------------------------
I0523 04:09:34.556] [sig-network] Services 
I0523 04:09:34.556]   should serve multiport endpoints from pods  [Conformance]
I0523 04:09:34.556]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.556] [BeforeEach] [sig-network] Services
... skipping 12 lines ...
I0523 04:09:34.558]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:808
I0523 04:09:34.558] [It] should serve multiport endpoints from pods  [Conformance]
I0523 04:09:34.559]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.559] I0523 04:06:13.358759      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.559] STEP: creating service multi-endpoint-test in namespace services-3166
I0523 04:09:34.559] STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3166 to expose endpoints map[]
I0523 04:09:34.559] May 23 04:06:13.367: INFO: Get endpoints failed (2.325978ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
I0523 04:09:34.560] May 23 04:06:14.370: INFO: successfully validated that service multi-endpoint-test in namespace services-3166 exposes endpoints map[] (1.005367525s elapsed)
I0523 04:09:34.560] STEP: Creating pod pod1 in namespace services-3166
I0523 04:09:34.560] STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3166 to expose endpoints map[pod1:[100]]
I0523 04:09:34.560] May 23 04:06:16.395: INFO: successfully validated that service multi-endpoint-test in namespace services-3166 exposes endpoints map[pod1:[100]] (2.01953558s elapsed)
I0523 04:09:34.560] STEP: Creating pod pod2 in namespace services-3166
I0523 04:09:34.560] STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3166 to expose endpoints map[pod1:[100] pod2:[101]]
... skipping 15 lines ...
I0523 04:09:34.563] • [SLOW TEST:7.234 seconds]
I0523 04:09:34.563] [sig-network] Services
I0523 04:09:34.563] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
I0523 04:09:34.563]   should serve multiport endpoints from pods  [Conformance]
I0523 04:09:34.563]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.563] ------------------------------
I0523 04:09:34.563] {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":292,"completed":271,"skipped":4489,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.563] SSSSSSSSS
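[The endpoints map[pod1:[100] pod2:[101]] above comes from a two-port Service whose named ports target different container ports. A sketch of the shape — targetPorts mirror the log, selector and labels are illustrative:]

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: multi-endpoint-test
  spec:
    selector:
      app: multi-endpoint-demo
    ports:
    - name: portname1
      port: 80
      targetPort: 100
    - name: portname2
      port: 81
      targetPort: 101
  EOF
  # Endpoints populate as pods with matching labels and ports appear:
  kubectl get endpoints multi-endpoint-test -o yaml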
I0523 04:09:34.563] ------------------------------
I0523 04:09:34.564] [sig-network] Networking Granular Checks: Pods 
I0523 04:09:34.564]   should function for intra-pod communication: http [NodeConformance] [Conformance]
I0523 04:09:34.564]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.564] [BeforeEach] [sig-network] Networking
... skipping 45 lines ...
I0523 04:09:34.571] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
I0523 04:09:34.571]   Granular Checks: Pods
I0523 04:09:34.571]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
I0523 04:09:34.571]     should function for intra-pod communication: http [NodeConformance] [Conformance]
I0523 04:09:34.571]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.572] ------------------------------
I0523 04:09:34.572] {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":292,"completed":272,"skipped":4498,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.572] SSSSSSSSSSSSSSSSS
I0523 04:09:34.572] ------------------------------
I0523 04:09:34.572] [sig-apps] Deployment 
I0523 04:09:34.572]   deployment should delete old replica sets [Conformance]
I0523 04:09:34.572]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.573] [BeforeEach] [sig-apps] Deployment
... skipping 36 lines ...
I0523 04:09:34.587] • [SLOW TEST:5.175 seconds]
I0523 04:09:34.587] [sig-apps] Deployment
I0523 04:09:34.587] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0523 04:09:34.587]   deployment should delete old replica sets [Conformance]
I0523 04:09:34.588]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.588] ------------------------------
I0523 04:09:34.588] {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":292,"completed":273,"skipped":4515,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.588] SS
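[The deployment test verifies that superseded ReplicaSets are pruned down to .spec.revisionHistoryLimit. A sketch, assuming the default limit of 10 is lowered to 0 so the old ReplicaSet disappears as soon as the rollout completes:]

  kubectl create deployment httpd-deployment --image=httpd
  kubectl patch deployment httpd-deployment \
    -p '{"spec":{"revisionHistoryLimit":0}}'
  kubectl set image deployment/httpd-deployment httpd=httpd:alpine
  # Once the rollout finishes, only the current ReplicaSet remains:
  kubectl get rs -l app=httpd-deployment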
I0523 04:09:34.588] ------------------------------
I0523 04:09:34.588] [sig-api-machinery] ResourceQuota 
I0523 04:09:34.589]   should create a ResourceQuota and capture the life of a service. [Conformance]
I0523 04:09:34.589]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.589] [BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 26 lines ...
I0523 04:09:34.593] • [SLOW TEST:11.168 seconds]
I0523 04:09:34.593] [sig-api-machinery] ResourceQuota
I0523 04:09:34.593] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0523 04:09:34.593]   should create a ResourceQuota and capture the life of a service. [Conformance]
I0523 04:09:34.594]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.594] ------------------------------
I0523 04:09:34.594] {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":292,"completed":274,"skipped":4517,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.594] SSSS
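["Capture the life of a service" means quota usage rises when a Service is created and falls again when it is deleted. A sketch:]

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: quota-services
  spec:
    hard:
      services: "2"
      services.nodeports: "1"
      services.loadbalancers: "1"
  EOF
  kubectl create service clusterip demo --tcp=80:80
  kubectl describe resourcequota quota-services   # "Used" now counts the service
  kubectl delete service demo                     # usage drops back to zero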
I0523 04:09:34.594] ------------------------------
I0523 04:09:34.594] [sig-network] DNS 
I0523 04:09:34.594]   should support configurable pod DNS nameservers [Conformance]
I0523 04:09:34.595]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.595] [BeforeEach] [sig-network] DNS
... skipping 23 lines ...
I0523 04:09:34.601] May 23 04:07:03.451: INFO: >>> kubeConfig: /tmp/kubeconfig-142995523
I0523 04:09:34.601] May 23 04:07:03.546: INFO: Deleting pod dns-1281...
I0523 04:09:34.601] [AfterEach] [sig-network] DNS
I0523 04:09:34.602]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.602] May 23 04:07:03.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.602] STEP: Destroying namespace "dns-1281" for this suite.
I0523 04:09:34.602] •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":292,"completed":275,"skipped":4521,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.602] SSSSSSSSSSSSSSSSSSSSSSS
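[The "configurable pod DNS nameservers" feature is driven by two pod fields: dnsPolicy: None and dnsConfig. A minimal sketch; the nameserver and search domain are illustrative:]

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: dns-config-demo
  spec:
    restartPolicy: Never
    dnsPolicy: None
    dnsConfig:
      nameservers: ["1.1.1.1"]
      searches: ["example.com"]
    containers:
    - name: test
      image: busybox
      command: ["cat", "/etc/resolv.conf"]   # shows only the configured entries
  EOF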
I0523 04:09:34.602] ------------------------------
I0523 04:09:34.602] [sig-storage] EmptyDir volumes 
I0523 04:09:34.603]   should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:34.603]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.603] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 9 lines ...
I0523 04:09:34.604] I0523 04:07:03.685937      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.604] I0523 04:07:03.685963      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.604] [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0523 04:09:34.605]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.605] I0523 04:07:03.688270      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.605] STEP: Creating a pod to test emptydir 0666 on tmpfs
I0523 04:09:34.605] May 23 04:07:03.695: INFO: Waiting up to 5m0s for pod "pod-7b97843f-fc55-4fd4-aebd-c3e1e5b2f3d0" in namespace "emptydir-1057" to be "Succeeded or Failed"
I0523 04:09:34.605] May 23 04:07:03.697: INFO: Pod "pod-7b97843f-fc55-4fd4-aebd-c3e1e5b2f3d0": Phase="Pending", Reason="", readiness=false. Elapsed: 1.99184ms
I0523 04:09:34.605] May 23 04:07:05.700: INFO: Pod "pod-7b97843f-fc55-4fd4-aebd-c3e1e5b2f3d0": Phase="Running", Reason="", readiness=true. Elapsed: 2.004902633s
I0523 04:09:34.606] May 23 04:07:07.702: INFO: Pod "pod-7b97843f-fc55-4fd4-aebd-c3e1e5b2f3d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007610759s
I0523 04:09:34.606] STEP: Saw pod success
I0523 04:09:34.606] May 23 04:07:07.702: INFO: Pod "pod-7b97843f-fc55-4fd4-aebd-c3e1e5b2f3d0" satisfied condition "Succeeded or Failed"
I0523 04:09:34.606] May 23 04:07:07.704: INFO: Trying to get logs from node kind-worker pod pod-7b97843f-fc55-4fd4-aebd-c3e1e5b2f3d0 container test-container: <nil>
I0523 04:09:34.606] STEP: delete the pod
I0523 04:09:34.606] May 23 04:07:07.717: INFO: Waiting for pod pod-7b97843f-fc55-4fd4-aebd-c3e1e5b2f3d0 to disappear
I0523 04:09:34.606] May 23 04:07:07.720: INFO: Pod pod-7b97843f-fc55-4fd4-aebd-c3e1e5b2f3d0 no longer exists
I0523 04:09:34.606] [AfterEach] [sig-storage] EmptyDir volumes
I0523 04:09:34.607]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.607] May 23 04:07:07.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.607] STEP: Destroying namespace "emptydir-1057" for this suite.
I0523 04:09:34.607] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":276,"skipped":4544,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.607] S
I0523 04:09:34.607] ------------------------------
I0523 04:09:34.607] [sig-cli] Kubectl client Kubectl diff 
I0523 04:09:34.607]   should check if kubectl diff finds a difference for Deployments [Conformance]
I0523 04:09:34.608]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.608] [BeforeEach] [sig-cli] Kubectl client
... skipping 24 lines ...
I0523 04:09:34.612] May 23 04:07:08.562: INFO: stderr: ""
I0523 04:09:34.613] May 23 04:07:08.562: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
I0523 04:09:34.613] [AfterEach] [sig-cli] Kubectl client
I0523 04:09:34.613]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.613] May 23 04:07:08.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.613] STEP: Destroying namespace "kubectl-6738" for this suite.
I0523 04:09:34.614] •{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":292,"completed":277,"skipped":4545,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.614] SSSSSSS
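[kubectl diff exits 1 when live and desired state differ and 0 when they match, which is the assertion behind the test above. A sketch:]

  kubectl create deployment httpd-deployment --image=httpd \
    --dry-run=client -o yaml > /tmp/httpd-deployment.yaml
  kubectl apply -f /tmp/httpd-deployment.yaml
  sed -i 's|image: httpd|image: httpd:alpine|' /tmp/httpd-deployment.yaml
  kubectl diff -f /tmp/httpd-deployment.yaml; echo "exit=$?"   # expect exit=1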
I0523 04:09:34.614] ------------------------------
I0523 04:09:34.614] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
I0523 04:09:34.614]   should include custom resource definition resources in discovery documents [Conformance]
I0523 04:09:34.614]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.615] [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 19 lines ...
I0523 04:09:34.618] STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
I0523 04:09:34.618] STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
I0523 04:09:34.618] [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
I0523 04:09:34.619]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.619] May 23 04:07:08.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.619] STEP: Destroying namespace "custom-resource-definition-4050" for this suite.
I0523 04:09:34.619] •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":292,"completed":278,"skipped":4552,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.619] SSSSSSSSSSSS
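[The discovery documents the test fetches are plain REST resources and can be pulled by hand; jq here is an assumed convenience, not a requirement:]

  kubectl get --raw /apis | jq -r '.groups[].name' | grep apiextensions
  kubectl get --raw /apis/apiextensions.k8s.io/v1 | jq -r '.resources[].name'
  # expect "customresourcedefinitions" (plus its status subresource) in the list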
I0523 04:09:34.620] ------------------------------
I0523 04:09:34.620] [sig-apps] Daemon set [Serial] 
I0523 04:09:34.620]   should run and stop simple daemon [Conformance]
I0523 04:09:34.620]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.620] [BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 107 lines ...
I0523 04:09:34.638] • [SLOW TEST:27.745 seconds]
I0523 04:09:34.639] [sig-apps] Daemon set [Serial]
I0523 04:09:34.639] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0523 04:09:34.639]   should run and stop simple daemon [Conformance]
I0523 04:09:34.639]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.639] ------------------------------
I0523 04:09:34.640] {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":292,"completed":279,"skipped":4564,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.640] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:34.640] ------------------------------
I0523 04:09:34.640] [sig-api-machinery] Secrets 
I0523 04:09:34.640]   should patch a secret [Conformance]
I0523 04:09:34.640]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.640] [BeforeEach] [sig-api-machinery] Secrets
... skipping 17 lines ...
I0523 04:09:34.643] STEP: deleting the secret using a LabelSelector
I0523 04:09:34.643] STEP: listing secrets in all namespaces, searching for label name and value in patch
I0523 04:09:34.643] [AfterEach] [sig-api-machinery] Secrets
I0523 04:09:34.644]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.644] May 23 04:07:36.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.644] STEP: Destroying namespace "secrets-4524" for this suite.
I0523 04:09:34.644] •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":292,"completed":280,"skipped":4632,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.644] SSSSS
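[The secret lifecycle above — patch, list by label across namespaces, delete by selector — maps onto plain kubectl calls; the names and labels below are illustrative:]

  kubectl create secret generic demo-secret --from-literal=key=value
  kubectl label secret demo-secret name=patched
  kubectl patch secret demo-secret \
    -p '{"data":{"key":"'"$(printf v2 | base64)"'"}}'
  kubectl get secrets --all-namespaces -l name=patched
  kubectl delete secrets -l name=patched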
I0523 04:09:34.644] ------------------------------
I0523 04:09:34.645] [sig-network] DNS 
I0523 04:09:34.645]   should provide DNS for services  [Conformance]
I0523 04:09:34.645]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.645] [BeforeEach] [sig-network] DNS
... skipping 24 lines ...
I0523 04:09:34.651] May 23 04:07:40.766: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.652] May 23 04:07:40.771: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.652] May 23 04:07:40.787: INFO: Unable to read jessie_udp@dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.652] May 23 04:07:40.789: INFO: Unable to read jessie_tcp@dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.653] May 23 04:07:40.791: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.653] May 23 04:07:40.793: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.653] May 23 04:07:40.807: INFO: Lookups using dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9 failed for: [wheezy_udp@dns-test-service.dns-6900.svc.cluster.local wheezy_tcp@dns-test-service.dns-6900.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6900.svc.cluster.local jessie_udp@dns-test-service.dns-6900.svc.cluster.local jessie_tcp@dns-test-service.dns-6900.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6900.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6900.svc.cluster.local]
I0523 04:09:34.653] 
I0523 04:09:34.654] May 23 04:07:45.810: INFO: Unable to read wheezy_udp@dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.654] May 23 04:07:45.813: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.654] May 23 04:07:45.834: INFO: Unable to read jessie_udp@dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.655] May 23 04:07:45.836: INFO: Unable to read jessie_tcp@dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.655] May 23 04:07:45.853: INFO: Lookups using dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9 failed for: [wheezy_udp@dns-test-service.dns-6900.svc.cluster.local wheezy_tcp@dns-test-service.dns-6900.svc.cluster.local jessie_udp@dns-test-service.dns-6900.svc.cluster.local jessie_tcp@dns-test-service.dns-6900.svc.cluster.local]
I0523 04:09:34.655] 
I0523 04:09:34.655] May 23 04:07:50.811: INFO: Unable to read wheezy_udp@dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.656] May 23 04:07:50.814: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.656] May 23 04:07:50.833: INFO: Unable to read jessie_udp@dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.656] May 23 04:07:50.835: INFO: Unable to read jessie_tcp@dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.656] May 23 04:07:50.850: INFO: Lookups using dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9 failed for: [wheezy_udp@dns-test-service.dns-6900.svc.cluster.local wheezy_tcp@dns-test-service.dns-6900.svc.cluster.local jessie_udp@dns-test-service.dns-6900.svc.cluster.local jessie_tcp@dns-test-service.dns-6900.svc.cluster.local]
I0523 04:09:34.657] 
I0523 04:09:34.657] May 23 04:07:55.810: INFO: Unable to read wheezy_udp@dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.657] May 23 04:07:55.813: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.657] May 23 04:07:55.835: INFO: Unable to read jessie_udp@dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.658] May 23 04:07:55.837: INFO: Unable to read jessie_tcp@dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.658] May 23 04:07:55.856: INFO: Lookups using dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9 failed for: [wheezy_udp@dns-test-service.dns-6900.svc.cluster.local wheezy_tcp@dns-test-service.dns-6900.svc.cluster.local jessie_udp@dns-test-service.dns-6900.svc.cluster.local jessie_tcp@dns-test-service.dns-6900.svc.cluster.local]
I0523 04:09:34.658] 
I0523 04:09:34.659] May 23 04:08:00.810: INFO: Unable to read wheezy_udp@dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.659] May 23 04:08:00.813: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.659] May 23 04:08:00.832: INFO: Unable to read jessie_udp@dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.659] May 23 04:08:00.834: INFO: Unable to read jessie_tcp@dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.660] May 23 04:08:00.851: INFO: Lookups using dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9 failed for: [wheezy_udp@dns-test-service.dns-6900.svc.cluster.local wheezy_tcp@dns-test-service.dns-6900.svc.cluster.local jessie_udp@dns-test-service.dns-6900.svc.cluster.local jessie_tcp@dns-test-service.dns-6900.svc.cluster.local]
I0523 04:09:34.660] 
I0523 04:09:34.660] May 23 04:08:05.810: INFO: Unable to read wheezy_udp@dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.660] May 23 04:08:05.813: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.661] May 23 04:08:05.832: INFO: Unable to read jessie_udp@dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.661] May 23 04:08:05.834: INFO: Unable to read jessie_tcp@dns-test-service.dns-6900.svc.cluster.local from pod dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9: the server could not find the requested resource (get pods dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9)
I0523 04:09:34.662] May 23 04:08:05.851: INFO: Lookups using dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9 failed for: [wheezy_udp@dns-test-service.dns-6900.svc.cluster.local wheezy_tcp@dns-test-service.dns-6900.svc.cluster.local jessie_udp@dns-test-service.dns-6900.svc.cluster.local jessie_tcp@dns-test-service.dns-6900.svc.cluster.local]
I0523 04:09:34.662] 
I0523 04:09:34.662] May 23 04:08:10.852: INFO: DNS probes using dns-6900/dns-test-ceb02e4c-5caf-497f-8b0a-a43ec37c84b9 succeeded
I0523 04:09:34.662] 
I0523 04:09:34.662] STEP: deleting the pod
I0523 04:09:34.662] STEP: deleting the test service
I0523 04:09:34.662] STEP: deleting the test headless service
... skipping 5 lines ...
I0523 04:09:34.663] • [SLOW TEST:34.309 seconds]
I0523 04:09:34.663] [sig-network] DNS
I0523 04:09:34.664] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
I0523 04:09:34.664]   should provide DNS for services  [Conformance]
I0523 04:09:34.664]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.664] ------------------------------
I0523 04:09:34.665] {"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":292,"completed":281,"skipped":4637,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.665] SSSSSSSSSSSSSSSSSSS
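[The wheezy/jessie probes above are pods resolving the test service's A and SRV records until the lookups converge; the repeated "Unable to read" lines are expected while DNS propagates. A one-off probe by hand — the namespace comes from the log and no longer exists; busybox:1.28 is used because nslookup in newer busybox images is unreliable:]

  kubectl run dns-probe --image=busybox:1.28 --restart=Never --rm -it -- \
    nslookup dns-test-service.dns-6900.svc.cluster.local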
I0523 04:09:34.665] ------------------------------
I0523 04:09:34.665] [sig-network] Networking Granular Checks: Pods 
I0523 04:09:34.665]   should function for intra-pod communication: udp [NodeConformance] [Conformance]
I0523 04:09:34.665]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.666] [BeforeEach] [sig-network] Networking
... skipping 44 lines ...
I0523 04:09:34.675] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
I0523 04:09:34.675]   Granular Checks: Pods
I0523 04:09:34.676]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
I0523 04:09:34.676]     should function for intra-pod communication: udp [NodeConformance] [Conformance]
I0523 04:09:34.676]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.676] ------------------------------
I0523 04:09:34.676] {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":292,"completed":282,"skipped":4656,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.677] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:34.677] ------------------------------
I0523 04:09:34.677] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0523 04:09:34.677]   listing validating webhooks should work [Conformance]
I0523 04:09:34.677]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.678] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 29 lines ...
I0523 04:09:34.683]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.683] May 23 04:08:39.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.683] STEP: Destroying namespace "webhook-2358" for this suite.
I0523 04:09:34.683] STEP: Destroying namespace "webhook-2358-markers" for this suite.
I0523 04:09:34.683] [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0523 04:09:34.683]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
I0523 04:09:34.684] •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":292,"completed":283,"skipped":4709,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.684] SSSSSSSSSSSSSSSSSSSSSSSSS
I0523 04:09:34.684] ------------------------------
I0523 04:09:34.684] [sig-network] Services 
I0523 04:09:34.684]   should have session affinity work for NodePort service [LinuxOnly] [Conformance]
I0523 04:09:34.684]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.684] [BeforeEach] [sig-network] Services
... skipping 77 lines ...
I0523 04:09:34.702] • [SLOW TEST:17.594 seconds]
I0523 04:09:34.702] [sig-network] Services
I0523 04:09:34.702] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
I0523 04:09:34.702]   should have session affinity work for NodePort service [LinuxOnly] [Conformance]
I0523 04:09:34.703]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.703] ------------------------------
I0523 04:09:34.703] {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":292,"completed":284,"skipped":4734,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.703] SSS
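[Session affinity is a single Service field: with sessionAffinity: ClientIP, kube-proxy pins each client address to one backend. A NodePort sketch; selector and ports are illustrative:]

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: affinity-nodeport
  spec:
    type: NodePort
    selector:
      app: affinity-demo
    sessionAffinity: ClientIP
    ports:
    - port: 80
      targetPort: 8080
  EOF
  # Repeated requests from one client should keep landing on a single pod.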
I0523 04:09:34.703] ------------------------------
I0523 04:09:34.703] [k8s.io] Container Runtime blackbox test on terminated container 
I0523 04:09:34.704]   should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
I0523 04:09:34.704]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.704] [BeforeEach] [k8s.io] Container Runtime
... skipping 19 lines ...
I0523 04:09:34.708] May 23 04:08:58.855: INFO: Expected: &{OK} to match Container's Termination Message: OK --
I0523 04:09:34.708] STEP: delete the container
I0523 04:09:34.709] [AfterEach] [k8s.io] Container Runtime
I0523 04:09:34.709]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.709] May 23 04:08:58.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.709] STEP: Destroying namespace "container-runtime-8990" for this suite.
I0523 04:09:34.710] •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":292,"completed":285,"skipped":4737,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.710] S
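[FallbackToLogsOnError means: use the file at terminationMessagePath (default /dev/termination-log) when it exists, otherwise fall back to the tail of the container log on failure. A sketch where the file wins, matching the "OK" assertion above:]

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: termmsg-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "printf OK > /dev/termination-log"]
      terminationMessagePolicy: FallbackToLogsOnError
  EOF
  kubectl get pod termmsg-demo \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # OK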
I0523 04:09:34.710] ------------------------------
I0523 04:09:34.710] [k8s.io] Variable Expansion 
I0523 04:09:34.710]   should allow substituting values in a container's command [NodeConformance] [Conformance]
I0523 04:09:34.710]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.711] [BeforeEach] [k8s.io] Variable Expansion
... skipping 9 lines ...
I0523 04:09:34.713] I0523 04:08:58.994888      17 reflector.go:207] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.713] I0523 04:08:58.994923      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.713] [It] should allow substituting values in a container's command [NodeConformance] [Conformance]
I0523 04:09:34.713]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.714] I0523 04:08:58.997365      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.714] STEP: Creating a pod to test substitution in container's command
I0523 04:09:34.714] May 23 04:08:59.002: INFO: Waiting up to 5m0s for pod "var-expansion-d107839b-5889-4e2c-9877-4c43e94201a6" in namespace "var-expansion-1332" to be "Succeeded or Failed"
I0523 04:09:34.714] May 23 04:08:59.003: INFO: Pod "var-expansion-d107839b-5889-4e2c-9877-4c43e94201a6": Phase="Pending", Reason="", readiness=false. Elapsed: 1.674841ms
I0523 04:09:34.715] May 23 04:09:01.007: INFO: Pod "var-expansion-d107839b-5889-4e2c-9877-4c43e94201a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004748951s
I0523 04:09:34.715] STEP: Saw pod success
I0523 04:09:34.715] May 23 04:09:01.007: INFO: Pod "var-expansion-d107839b-5889-4e2c-9877-4c43e94201a6" satisfied condition "Succeeded or Failed"
I0523 04:09:34.715] May 23 04:09:01.009: INFO: Trying to get logs from node kind-worker pod var-expansion-d107839b-5889-4e2c-9877-4c43e94201a6 container dapi-container: <nil>
I0523 04:09:34.715] STEP: delete the pod
I0523 04:09:34.715] May 23 04:09:01.029: INFO: Waiting for pod var-expansion-d107839b-5889-4e2c-9877-4c43e94201a6 to disappear
I0523 04:09:34.716] May 23 04:09:01.031: INFO: Pod var-expansion-d107839b-5889-4e2c-9877-4c43e94201a6 no longer exists
I0523 04:09:34.716] [AfterEach] [k8s.io] Variable Expansion
I0523 04:09:34.716]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.716] May 23 04:09:01.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.716] STEP: Destroying namespace "var-expansion-1332" for this suite.
I0523 04:09:34.717] •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":292,"completed":286,"skipped":4738,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.717] SSSSSSSSSSSSS
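[The substitution tested here is done by the kubelet, not the shell: $(MESSAGE) references in command/args are expanded from the container's env before the process starts. A sketch:]

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "echo test message: $(MESSAGE)"]
      env:
      - name: MESSAGE
        value: "hello from the env"
  EOF
  kubectl logs var-expansion-demo   # prints the expanded value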
I0523 04:09:34.717] ------------------------------
I0523 04:09:34.717] [sig-scheduling] SchedulerPredicates [Serial] 
I0523 04:09:34.717]   validates that NodeSelector is respected if matching  [Conformance]
I0523 04:09:34.718]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.718] [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 41 lines ...
I0523 04:09:34.726] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
I0523 04:09:34.726]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.727] May 23 04:09:09.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.727] STEP: Destroying namespace "sched-pred-6588" for this suite.
I0523 04:09:34.727] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
I0523 04:09:34.727]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
I0523 04:09:34.727] I0523 04:09:09.228121      17 request.go:821] Error in request: resource name may not be empty
I0523 04:09:34.727] 
I0523 04:09:34.728] • [SLOW TEST:8.192 seconds]
I0523 04:09:34.728] [sig-scheduling] SchedulerPredicates [Serial]
I0523 04:09:34.728] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
I0523 04:09:34.728]   validates that NodeSelector is respected if matching  [Conformance]
I0523 04:09:34.728]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.728] ------------------------------
I0523 04:09:34.729] {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":292,"completed":287,"skipped":4751,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.729] SSSSSSSSS
I0523 04:09:34.729] ------------------------------
I0523 04:09:34.729] [sig-node] PodTemplates 
I0523 04:09:34.729]   should run the lifecycle of PodTemplates [Conformance]
I0523 04:09:34.730]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.730] [BeforeEach] [sig-node] PodTemplates
... skipping 12 lines ...
I0523 04:09:34.733]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.733] I0523 04:09:09.355449      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.733] [AfterEach] [sig-node] PodTemplates
I0523 04:09:34.733]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.733] May 23 04:09:09.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.734] STEP: Destroying namespace "podtemplate-9548" for this suite.
I0523 04:09:34.734] •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":292,"completed":288,"skipped":4760,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.734] SSSSSSSSSSSSSS
I0523 04:09:34.734] ------------------------------
I0523 04:09:34.734] [sig-storage] Secrets 
I0523 04:09:34.734]   should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
I0523 04:09:34.735]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.735] [BeforeEach] [sig-storage] Secrets
... skipping 10 lines ...
I0523 04:09:34.737] I0523 04:09:09.502386      17 reflector.go:243] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.737] [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
I0523 04:09:34.737]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.738] I0523 04:09:09.504429      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.738] STEP: Creating secret with name secret-test-map-cb443298-ef40-488b-957c-619b7e856e38
I0523 04:09:34.738] STEP: Creating a pod to test consume secrets
I0523 04:09:34.738] May 23 04:09:09.511: INFO: Waiting up to 5m0s for pod "pod-secrets-77c4afbb-a0ca-4fa4-931d-0bc447b22acc" in namespace "secrets-5064" to be "Succeeded or Failed"
I0523 04:09:34.739] May 23 04:09:09.513: INFO: Pod "pod-secrets-77c4afbb-a0ca-4fa4-931d-0bc447b22acc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057094ms
I0523 04:09:34.739] May 23 04:09:11.515: INFO: Pod "pod-secrets-77c4afbb-a0ca-4fa4-931d-0bc447b22acc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004545975s
I0523 04:09:34.739] STEP: Saw pod success
I0523 04:09:34.739] May 23 04:09:11.515: INFO: Pod "pod-secrets-77c4afbb-a0ca-4fa4-931d-0bc447b22acc" satisfied condition "Succeeded or Failed"
I0523 04:09:34.740] May 23 04:09:11.517: INFO: Trying to get logs from node kind-worker pod pod-secrets-77c4afbb-a0ca-4fa4-931d-0bc447b22acc container secret-volume-test: <nil>
I0523 04:09:34.740] STEP: delete the pod
I0523 04:09:34.740] May 23 04:09:11.528: INFO: Waiting for pod pod-secrets-77c4afbb-a0ca-4fa4-931d-0bc447b22acc to disappear
I0523 04:09:34.740] May 23 04:09:11.530: INFO: Pod pod-secrets-77c4afbb-a0ca-4fa4-931d-0bc447b22acc no longer exists
I0523 04:09:34.740] [AfterEach] [sig-storage] Secrets
I0523 04:09:34.741]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.741] May 23 04:09:11.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.741] STEP: Destroying namespace "secrets-5064" for this suite.
I0523 04:09:34.741] •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":292,"completed":289,"skipped":4774,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.742] SSSSSSSSSSSSSSSSSSSSSSSSSSS
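["With mappings" means the volume's items list remaps a secret key to a chosen path (and optionally file mode) instead of the default key-named file. A sketch mirroring the names in the log:]

  kubectl create secret generic secret-test-map --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-map-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["cat", "/etc/secret-volume/new-path-data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-map
        items:
        - key: data-1
          path: new-path-data-1
  EOF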
I0523 04:09:34.742] ------------------------------
I0523 04:09:34.742] [sig-network] Services 
I0523 04:09:34.742]   should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
I0523 04:09:34.742]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.742] [BeforeEach] [sig-network] Services
... skipping 71 lines ...
I0523 04:09:34.759] • [SLOW TEST:15.181 seconds]
I0523 04:09:34.759] [sig-network] Services
I0523 04:09:34.759] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
I0523 04:09:34.760]   should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
I0523 04:09:34.760]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.760] ------------------------------
I0523 04:09:34.760] {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":292,"completed":290,"skipped":4801,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.760] SSSSS
I0523 04:09:34.761] ------------------------------
I0523 04:09:34.761] [sig-storage] Projected combined 
I0523 04:09:34.761]   should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
I0523 04:09:34.761]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.761] [BeforeEach] [sig-storage] Projected combined
... skipping 11 lines ...
I0523 04:09:34.763] [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
I0523 04:09:34.763]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
I0523 04:09:34.764] I0523 04:09:26.850014      17 reflector.go:213] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0523 04:09:34.764] STEP: Creating configMap with name configmap-projected-all-test-volume-5f64fdf5-04b9-4706-b5a0-769450be06e5
I0523 04:09:34.764] STEP: Creating secret with name secret-projected-all-test-volume-798c7609-aa4c-423c-a301-17ccda94a12a
I0523 04:09:34.764] STEP: Creating a pod to test Check all projections for projected volume plugin
I0523 04:09:34.764] May 23 04:09:26.861: INFO: Waiting up to 5m0s for pod "projected-volume-1a1011c1-6b57-41ed-b4c1-1e040c4bd9c9" in namespace "projected-1383" to be "Succeeded or Failed"
I0523 04:09:34.765] May 23 04:09:26.863: INFO: Pod "projected-volume-1a1011c1-6b57-41ed-b4c1-1e040c4bd9c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.510553ms
I0523 04:09:34.765] May 23 04:09:28.867: INFO: Pod "projected-volume-1a1011c1-6b57-41ed-b4c1-1e040c4bd9c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005920855s
I0523 04:09:34.765] STEP: Saw pod success
I0523 04:09:34.765] May 23 04:09:28.867: INFO: Pod "projected-volume-1a1011c1-6b57-41ed-b4c1-1e040c4bd9c9" satisfied condition "Succeeded or Failed"
I0523 04:09:34.765] May 23 04:09:28.869: INFO: Trying to get logs from node kind-worker pod projected-volume-1a1011c1-6b57-41ed-b4c1-1e040c4bd9c9 container projected-all-volume-test: <nil>
I0523 04:09:34.766] STEP: delete the pod
I0523 04:09:34.766] May 23 04:09:28.881: INFO: Waiting for pod projected-volume-1a1011c1-6b57-41ed-b4c1-1e040c4bd9c9 to disappear
I0523 04:09:34.766] May 23 04:09:28.883: INFO: Pod projected-volume-1a1011c1-6b57-41ed-b4c1-1e040c4bd9c9 no longer exists
I0523 04:09:34.766] [AfterEach] [sig-storage] Projected combined
I0523 04:09:34.766]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
I0523 04:09:34.766] May 23 04:09:28.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0523 04:09:34.767] STEP: Destroying namespace "projected-1383" for this suite.
I0523 04:09:34.767] •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":292,"completed":291,"skipped":4806,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.767] May 23 04:09:28.889: INFO: Running AfterSuite actions on all nodes
I0523 04:09:34.767] May 23 04:09:28.889: INFO: Running AfterSuite actions on node 1
I0523 04:09:34.767] May 23 04:09:28.889: INFO: Skipping dumping logs from cluster
I0523 04:09:34.767] 
I0523 04:09:34.768] JUnit report was created: /tmp/results/junit_01.xml
I0523 04:09:34.768] {"msg":"Test Suite completed","total":292,"completed":291,"skipped":4806,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}
I0523 04:09:34.768] 
I0523 04:09:34.768] 
I0523 04:09:34.768] Summarizing 1 Failure:
I0523 04:09:34.768] 
I0523 04:09:34.768] [Fail] [sig-cli] Kubectl client Kubectl logs [It] should be able to retrieve and filter logs  [Conformance] 
I0523 04:09:34.769] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1439
I0523 04:09:34.769] 
I0523 04:09:34.769] Ran 292 of 5098 Specs in 5079.455 seconds
I0523 04:09:34.769] FAIL! -- 291 Passed | 1 Failed | 0 Pending | 4806 Skipped
I0523 04:09:34.769] --- FAIL: TestE2E (5079.50s)
I0523 04:09:34.769] FAIL
I0523 04:09:34.769] 
I0523 04:09:34.769] Ginkgo ran 1 suite in 1h24m40.71989824s
I0523 04:09:34.769] Test Suite Failed
I0523 04:09:34.770] + ret=1
I0523 04:09:34.770] + set +x
W0523 04:09:34.870] + cleanup
W0523 04:09:34.870] + kind export logs /workspace/_artifacts/logs
I0523 04:09:37.661] /workspace/_artifacts/logs
W0523 04:09:37.762] Exported logs for cluster "kind" to:
... skipping 10 lines ...
W0523 04:09:46.698]     check(*cmd)
W0523 04:09:46.698]   File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 30, in check
W0523 04:09:46.698]     subprocess.check_call(cmd)
W0523 04:09:46.698]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0523 04:09:46.698]     raise CalledProcessError(retcode, cmd)
W0523 04:09:46.699] subprocess.CalledProcessError: Command '('bash', '-c', 'cd ./../../k8s.io/kubernetes && source ./../test-infra/experiment/kind-conformance-image-e2e.sh')' returned non-zero exit status 1
E0523 04:09:46.709] Command failed
I0523 04:09:46.710] process 486 exited with code 1 after 94.8m
E0523 04:09:46.710] FAIL: ci-kubernetes-conformance-image-test
I0523 04:09:46.711] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0523 04:09:47.304] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0523 04:09:47.363] process 285868 exited with code 0 after 0.0m
I0523 04:09:47.363] Call:  gcloud config get-value account
I0523 04:09:47.794] process 285882 exited with code 0 after 0.0m
I0523 04:09:47.794] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0523 04:09:47.794] Upload result and artifacts...
I0523 04:09:47.794] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-conformance-image-test/1264021560298049538
I0523 04:09:47.795] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-conformance-image-test/1264021560298049538/artifacts
W0523 04:09:49.051] CommandException: One or more URLs matched no objects.
E0523 04:09:49.179] Command failed
I0523 04:09:49.179] process 285896 exited with code 1 after 0.0m
W0523 04:09:49.179] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-conformance-image-test/1264021560298049538/artifacts not exist yet
I0523 04:09:49.180] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-conformance-image-test/1264021560298049538/artifacts
I0523 04:09:51.915] process 286042 exited with code 0 after 0.0m
W0523 04:09:51.916] metadata path /workspace/_artifacts/metadata.json does not exist
W0523 04:09:51.916] metadata not found or invalid, init with empty metadata
... skipping 15 lines ...