Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-06-01 16:44
Elapsed: 2h0m
Revision: master
Resultstore: https://source.cloud.google.com/results/invocations/ac82a4c1-88da-44f6-859a-64975bb2f9d5/targets/test

No Test Failures!


Error lines from build-log.txt

... skipping 71 lines ...
Analyzing: 4 targets (124 packages loaded, 362 targets configured)
Analyzing: 4 targets (864 packages loaded, 8574 targets configured)
Analyzing: 4 targets (2268 packages loaded, 15447 targets configured)
Analyzing: 4 targets (2268 packages loaded, 15447 targets configured)
Analyzing: 4 targets (2269 packages loaded, 15447 targets configured)
Analyzing: 4 targets (2269 packages loaded, 15447 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/bazel_gazelle/internal/go_repository.bzl:184:13: org_golang_x_tools: gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go:2:43: expected 'package', found 'EOF'
gazelle: found packages lib (issue27856.go) and server (issue29198.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gccgoimporter/testdata
gazelle: found packages issue25301 (issue25301.go) and a (a.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gcimporter/testdata
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go:1:34: expected 'package', found 'EOF'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go:1:16: expected ';', found '.'
gazelle: finding module path for import domain.name/importdecl: exit status 1: can't load package: package domain.name/importdecl: cannot find module providing package domain.name/importdecl
gazelle: finding module path for import old.com/one: exit status 1: can't load package: package old.com/one: cannot find module providing package old.com/one
gazelle: finding module path for import titanic.biz/bar: exit status 1: can't load package: package titanic.biz/bar: cannot find module providing package titanic.biz/bar
gazelle: finding module path for import titanic.biz/foo: exit status 1: can't load package: package titanic.biz/foo: cannot find module providing package titanic.biz/foo
gazelle: finding module path for import fruit.io/pear: exit status 1: can't load package: package fruit.io/pear: cannot find module providing package fruit.io/pear
gazelle: finding module path for import fruit.io/banana: exit status 1: can't load package: package fruit.io/banana: cannot find module providing package fruit.io/banana
... skipping 158 lines ...
WARNING: Waiting for server process to terminate (waited 10 seconds, waiting at most 60)
WARNING: Waiting for server process to terminate (waited 30 seconds, waiting at most 60)
INFO: Waited 60 seconds for server process (pid=5771) to terminate.
WARNING: Waiting for server process to terminate (waited 5 seconds, waiting at most 10)
WARNING: Waiting for server process to terminate (waited 10 seconds, waiting at most 10)
INFO: Waited 10 seconds for server process (pid=5771) to terminate.
FATAL: Attempted to kill stale server process (pid=5771) using SIGKILL, but it did not die in a timely fashion.
+ true
+ pkill ^bazel
+ true
+ mkdir -p _output/bin/
+ cp bazel-bin/test/e2e/e2e.test _output/bin/
+ find /home/prow/go/src/k8s.io/kubernetes/bazel-bin/ -name kubectl -type f
... skipping 46 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.3
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.3
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: kind-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.3
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 32 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.4
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.4
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.18.0.4
... skipping 4 lines ...
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 32 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.2
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.2
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: kind-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.2
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 37 lines ...
I0601 16:59:49.213404     293 checks.go:376] validating the presence of executable ebtables
I0601 16:59:49.213471     293 checks.go:376] validating the presence of executable ethtool
I0601 16:59:49.213500     293 checks.go:376] validating the presence of executable socat
I0601 16:59:49.213563     293 checks.go:376] validating the presence of executable tc
I0601 16:59:49.213688     293 checks.go:376] validating the presence of executable touch
I0601 16:59:49.213743     293 checks.go:520] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_PIDS: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0601 16:59:49.252938     293 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0601 16:59:49.308077     293 checks.go:618] validating kubelet version
I0601 16:59:49.872924     293 checks.go:128] validating if the "kubelet" service is enabled and active
I0601 16:59:49.935598     293 checks.go:201] validating availability of port 10250
I0601 16:59:49.935721     293 checks.go:201] validating availability of port 2379
I0601 16:59:49.935754     293 checks.go:201] validating availability of port 2380
... skipping 105 lines ...
I0601 17:00:15.783057     293 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 242 milliseconds
I0601 17:00:16.136569     293 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 130 milliseconds
I0601 17:00:16.701484     293 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 167 milliseconds
I0601 17:00:17.113405     293 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 92 milliseconds
I0601 17:00:17.596785     293 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 79 milliseconds
I0601 17:00:28.010283     293 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 10000 milliseconds
I0601 17:00:35.375003     293 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 6866 milliseconds
I0601 17:00:35.640535     293 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 130 milliseconds
I0601 17:00:36.022257     293 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 11 milliseconds
I0601 17:00:36.521757     293 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 13 milliseconds
I0601 17:00:37.018604     293 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 12 milliseconds
I0601 17:00:37.526665     293 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 20 milliseconds
I0601 17:00:38.013174     293 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 6 milliseconds
[kubelet-check] Initial timeout of 40s passed.
I0601 17:00:38.517804     293 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 9 milliseconds
I0601 17:00:39.013375     293 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 7 milliseconds
[apiclient] All control plane components are healthy after 41.092689 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0601 17:00:39.536164     293 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 200 OK in 28 milliseconds
I0601 17:00:39.536453     293 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
I0601 17:00:39.562299     293 round_trippers.go:443] POST https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 22 milliseconds
I0601 17:00:39.585126     293 round_trippers.go:443] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 19 milliseconds
... skipping 108 lines ...
I0601 17:01:11.638633     777 checks.go:376] validating the presence of executable ebtables
I0601 17:01:11.638670     777 checks.go:376] validating the presence of executable ethtool
I0601 17:01:11.638696     777 checks.go:376] validating the presence of executable socat
I0601 17:01:11.638731     777 checks.go:376] validating the presence of executable tc
I0601 17:01:11.638754     777 checks.go:376] validating the presence of executable touch
I0601 17:01:11.638791     777 checks.go:520] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0601 17:01:11.684556     777 checks.go:406] checking whether the given node name is reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
... skipping 78 lines ...
I0601 17:01:11.725245     774 checks.go:376] validating the presence of executable ebtables
I0601 17:01:11.725292     774 checks.go:376] validating the presence of executable ethtool
I0601 17:01:11.725327     774 checks.go:376] validating the presence of executable socat
I0601 17:01:11.725370     774 checks.go:376] validating the presence of executable tc
I0601 17:01:11.725404     774 checks.go:376] validating the presence of executable touch
I0601 17:01:11.725448     774 checks.go:520] running all checks
[preflight] The system verification failed. Printing the output from the verification:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0601 17:01:11.778825     774 checks.go:406] checking whether the given node name is reachable using net.LookupHost
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
... skipping 81 lines ...
+ + GINKGO_PID=12001
+ wait 12001
./hack/ginkgo-e2e.sh --provider=skeleton --num-nodes=2 --ginkgo.focus=\[Conformance\] --ginkgo.skip= --report-dir=/logs/artifacts --disable-log-dump=true
Conformance test: not doing test setup.
I0601 17:02:19.204730   12365 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0601 17:02:19.205661   12365 e2e.go:129] Starting e2e run "433bb595-0e10-407e-83bc-d28099d99a1e" on Ginkgo node 1
{"msg":"Test Suite starting","total":292,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1591030935 - Will randomize all specs
Will run 292 of 5101 specs

Jun  1 17:02:19.254: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 20 lines ...
STEP: Building a namespace api object, basename emptydir
Jun  1 17:02:19.658: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun  1 17:02:19.707: INFO: Waiting up to 5m0s for pod "pod-0eaef0de-218e-4b60-88a9-6b4180f0cdd0" in namespace "emptydir-9808" to be "Succeeded or Failed"
Jun  1 17:02:19.735: INFO: Pod "pod-0eaef0de-218e-4b60-88a9-6b4180f0cdd0": Phase="Pending", Reason="", readiness=false. Elapsed: 26.471044ms
Jun  1 17:02:21.750: INFO: Pod "pod-0eaef0de-218e-4b60-88a9-6b4180f0cdd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041468647s
Jun  1 17:02:23.761: INFO: Pod "pod-0eaef0de-218e-4b60-88a9-6b4180f0cdd0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052596121s
Jun  1 17:02:25.793: INFO: Pod "pod-0eaef0de-218e-4b60-88a9-6b4180f0cdd0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085028972s
Jun  1 17:02:27.816: INFO: Pod "pod-0eaef0de-218e-4b60-88a9-6b4180f0cdd0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10758497s
Jun  1 17:02:29.836: INFO: Pod "pod-0eaef0de-218e-4b60-88a9-6b4180f0cdd0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.127533163s
Jun  1 17:02:31.852: INFO: Pod "pod-0eaef0de-218e-4b60-88a9-6b4180f0cdd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.144118193s
STEP: Saw pod success
Jun  1 17:02:31.852: INFO: Pod "pod-0eaef0de-218e-4b60-88a9-6b4180f0cdd0" satisfied condition "Succeeded or Failed"
Jun  1 17:02:31.869: INFO: Trying to get logs from node kind-worker pod pod-0eaef0de-218e-4b60-88a9-6b4180f0cdd0 container test-container: <nil>
STEP: delete the pod
Jun  1 17:02:31.965: INFO: Waiting for pod pod-0eaef0de-218e-4b60-88a9-6b4180f0cdd0 to disappear
Jun  1 17:02:31.981: INFO: Pod pod-0eaef0de-218e-4b60-88a9-6b4180f0cdd0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 17:02:31.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9808" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":1,"skipped":1,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-e0d4b1af-9838-42ef-9ecd-6df68ad1d821
STEP: Creating a pod to test consume configMaps
Jun  1 17:02:32.162: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dc677533-c086-49bb-9fab-bcab598449f8" in namespace "projected-3970" to be "Succeeded or Failed"
Jun  1 17:02:32.174: INFO: Pod "pod-projected-configmaps-dc677533-c086-49bb-9fab-bcab598449f8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.801347ms
Jun  1 17:02:34.189: INFO: Pod "pod-projected-configmaps-dc677533-c086-49bb-9fab-bcab598449f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026929261s
Jun  1 17:02:36.209: INFO: Pod "pod-projected-configmaps-dc677533-c086-49bb-9fab-bcab598449f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04687172s
Jun  1 17:02:38.232: INFO: Pod "pod-projected-configmaps-dc677533-c086-49bb-9fab-bcab598449f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069002272s
Jun  1 17:02:40.245: INFO: Pod "pod-projected-configmaps-dc677533-c086-49bb-9fab-bcab598449f8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082444701s
Jun  1 17:02:42.258: INFO: Pod "pod-projected-configmaps-dc677533-c086-49bb-9fab-bcab598449f8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.095624383s
Jun  1 17:02:44.279: INFO: Pod "pod-projected-configmaps-dc677533-c086-49bb-9fab-bcab598449f8": Phase="Running", Reason="", readiness=true. Elapsed: 12.116453318s
Jun  1 17:02:46.294: INFO: Pod "pod-projected-configmaps-dc677533-c086-49bb-9fab-bcab598449f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.131394629s
STEP: Saw pod success
Jun  1 17:02:46.294: INFO: Pod "pod-projected-configmaps-dc677533-c086-49bb-9fab-bcab598449f8" satisfied condition "Succeeded or Failed"
Jun  1 17:02:46.316: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-dc677533-c086-49bb-9fab-bcab598449f8 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 17:02:46.394: INFO: Waiting for pod pod-projected-configmaps-dc677533-c086-49bb-9fab-bcab598449f8 to disappear
Jun  1 17:02:46.408: INFO: Pod pod-projected-configmaps-dc677533-c086-49bb-9fab-bcab598449f8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 17:02:46.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3970" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":292,"completed":2,"skipped":3,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 68 lines ...
Jun  1 17:06:22.376: INFO: Waiting for statefulset status.replicas updated to 0
Jun  1 17:06:22.406: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Jun  1 17:06:22.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1582" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":292,"completed":3,"skipped":20,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Jun  1 17:06:28.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5044" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":4,"skipped":55,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-d6d5436c-ba26-409d-bb6b-ea8382619130
STEP: Creating a pod to test consume configMaps
Jun  1 17:06:28.988: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-11c235eb-00fe-4dae-9747-a73dd8daef2c" in namespace "projected-1640" to be "Succeeded or Failed"
Jun  1 17:06:29.008: INFO: Pod "pod-projected-configmaps-11c235eb-00fe-4dae-9747-a73dd8daef2c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.972153ms
Jun  1 17:06:31.042: INFO: Pod "pod-projected-configmaps-11c235eb-00fe-4dae-9747-a73dd8daef2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053518139s
Jun  1 17:06:33.053: INFO: Pod "pod-projected-configmaps-11c235eb-00fe-4dae-9747-a73dd8daef2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064513241s
STEP: Saw pod success
Jun  1 17:06:33.053: INFO: Pod "pod-projected-configmaps-11c235eb-00fe-4dae-9747-a73dd8daef2c" satisfied condition "Succeeded or Failed"
Jun  1 17:06:33.070: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-11c235eb-00fe-4dae-9747-a73dd8daef2c container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 17:06:33.129: INFO: Waiting for pod pod-projected-configmaps-11c235eb-00fe-4dae-9747-a73dd8daef2c to disappear
Jun  1 17:06:33.140: INFO: Pod pod-projected-configmaps-11c235eb-00fe-4dae-9747-a73dd8daef2c no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 17:06:33.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1640" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":292,"completed":5,"skipped":70,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 17:06:33.339: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f134d131-579a-4254-91cb-812315eb1b04" in namespace "projected-9435" to be "Succeeded or Failed"
Jun  1 17:06:33.352: INFO: Pod "downwardapi-volume-f134d131-579a-4254-91cb-812315eb1b04": Phase="Pending", Reason="", readiness=false. Elapsed: 11.996017ms
Jun  1 17:06:35.360: INFO: Pod "downwardapi-volume-f134d131-579a-4254-91cb-812315eb1b04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020331875s
Jun  1 17:06:37.377: INFO: Pod "downwardapi-volume-f134d131-579a-4254-91cb-812315eb1b04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037306363s
STEP: Saw pod success
Jun  1 17:06:37.379: INFO: Pod "downwardapi-volume-f134d131-579a-4254-91cb-812315eb1b04" satisfied condition "Succeeded or Failed"
Jun  1 17:06:37.399: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-f134d131-579a-4254-91cb-812315eb1b04 container client-container: <nil>
STEP: delete the pod
Jun  1 17:06:37.489: INFO: Waiting for pod downwardapi-volume-f134d131-579a-4254-91cb-812315eb1b04 to disappear
Jun  1 17:06:37.505: INFO: Pod downwardapi-volume-f134d131-579a-4254-91cb-812315eb1b04 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 17:06:37.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9435" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":292,"completed":6,"skipped":87,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Jun  1 17:06:37.552: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jun  1 17:06:37.737: INFO: Waiting up to 5m0s for pod "downward-api-c472f810-39ac-4d92-b903-a8017dc6cfd0" in namespace "downward-api-8799" to be "Succeeded or Failed"
Jun  1 17:06:37.745: INFO: Pod "downward-api-c472f810-39ac-4d92-b903-a8017dc6cfd0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.800417ms
Jun  1 17:06:39.785: INFO: Pod "downward-api-c472f810-39ac-4d92-b903-a8017dc6cfd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048007727s
Jun  1 17:06:41.819: INFO: Pod "downward-api-c472f810-39ac-4d92-b903-a8017dc6cfd0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082106668s
Jun  1 17:06:43.848: INFO: Pod "downward-api-c472f810-39ac-4d92-b903-a8017dc6cfd0": Phase="Running", Reason="", readiness=true. Elapsed: 6.110833156s
Jun  1 17:06:45.863: INFO: Pod "downward-api-c472f810-39ac-4d92-b903-a8017dc6cfd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.125588545s
STEP: Saw pod success
Jun  1 17:06:45.863: INFO: Pod "downward-api-c472f810-39ac-4d92-b903-a8017dc6cfd0" satisfied condition "Succeeded or Failed"
Jun  1 17:06:45.881: INFO: Trying to get logs from node kind-worker pod downward-api-c472f810-39ac-4d92-b903-a8017dc6cfd0 container dapi-container: <nil>
STEP: delete the pod
Jun  1 17:06:45.958: INFO: Waiting for pod downward-api-c472f810-39ac-4d92-b903-a8017dc6cfd0 to disappear
Jun  1 17:06:45.977: INFO: Pod downward-api-c472f810-39ac-4d92-b903-a8017dc6cfd0 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Jun  1 17:06:45.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8799" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":292,"completed":7,"skipped":98,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] 
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
... skipping 19 lines ...
Jun  1 17:08:21.033: INFO: Noticed Pod "taint-eviction-b2" gets evicted.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
  test/e2e/framework/framework.go:175
Jun  1 17:08:21.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-multiple-pods-6991" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":292,"completed":8,"skipped":127,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 5 lines ...
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Jun  1 17:08:25.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-556" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":292,"completed":9,"skipped":145,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 20 lines ...
Jun  1 17:08:47.510: INFO: The status of Pod test-webserver-f5721306-4dfc-4fc0-be72-4263801b9132 is Running (Ready = true)
Jun  1 17:08:47.518: INFO: Container started at 2020-06-01 17:08:28 +0000 UTC, pod became ready at 2020-06-01 17:08:46 +0000 UTC
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 17:08:47.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1090" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":292,"completed":10,"skipped":173,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-01b588a2-e281-4ec1-826d-ec22ad7e479d
STEP: Creating a pod to test consume secrets
Jun  1 17:08:47.754: INFO: Waiting up to 5m0s for pod "pod-secrets-65188435-4dac-47a4-acc3-0af2ddcfcfba" in namespace "secrets-4184" to be "Succeeded or Failed"
Jun  1 17:08:47.785: INFO: Pod "pod-secrets-65188435-4dac-47a4-acc3-0af2ddcfcfba": Phase="Pending", Reason="", readiness=false. Elapsed: 30.683012ms
Jun  1 17:08:49.809: INFO: Pod "pod-secrets-65188435-4dac-47a4-acc3-0af2ddcfcfba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054520123s
Jun  1 17:08:51.837: INFO: Pod "pod-secrets-65188435-4dac-47a4-acc3-0af2ddcfcfba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082364738s
STEP: Saw pod success
Jun  1 17:08:51.837: INFO: Pod "pod-secrets-65188435-4dac-47a4-acc3-0af2ddcfcfba" satisfied condition "Succeeded or Failed"
Jun  1 17:08:51.853: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-65188435-4dac-47a4-acc3-0af2ddcfcfba container secret-env-test: <nil>
STEP: delete the pod
Jun  1 17:08:51.980: INFO: Waiting for pod pod-secrets-65188435-4dac-47a4-acc3-0af2ddcfcfba to disappear
Jun  1 17:08:51.990: INFO: Pod pod-secrets-65188435-4dac-47a4-acc3-0af2ddcfcfba no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Jun  1 17:08:51.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4184" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":292,"completed":11,"skipped":197,"failed":0}
SSS
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 10 lines ...
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Jun  1 17:08:52.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3056" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":292,"completed":12,"skipped":200,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 39 lines ...
Jun  1 17:09:12.020: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config explain e2e-test-crd-publish-openapi-9592-crds.spec'
Jun  1 17:09:13.727: INFO: stderr: ""
Jun  1 17:09:13.728: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9592-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jun  1 17:09:13.728: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config explain e2e-test-crd-publish-openapi-9592-crds.spec.bars'
Jun  1 17:09:15.393: INFO: stderr: ""
Jun  1 17:09:15.393: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9592-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jun  1 17:09:15.394: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config explain e2e-test-crd-publish-openapi-9592-crds.spec.bars2'
Jun  1 17:09:17.248: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 17:09:21.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7270" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":292,"completed":13,"skipped":205,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
Jun  1 17:09:28.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-513" for this suite.
STEP: Destroying namespace "webhook-513-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":292,"completed":14,"skipped":218,"failed":0}
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should print the output to logs [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Jun  1 17:09:35.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6542" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":292,"completed":15,"skipped":220,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-vgx5
STEP: Creating a pod to test atomic-volume-subpath
Jun  1 17:09:35.751: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vgx5" in namespace "subpath-4125" to be "Succeeded or Failed"
Jun  1 17:09:35.768: INFO: Pod "pod-subpath-test-configmap-vgx5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.976986ms
Jun  1 17:09:37.787: INFO: Pod "pod-subpath-test-configmap-vgx5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036114125s
Jun  1 17:09:39.800: INFO: Pod "pod-subpath-test-configmap-vgx5": Phase="Running", Reason="", readiness=true. Elapsed: 4.048368019s
Jun  1 17:09:41.826: INFO: Pod "pod-subpath-test-configmap-vgx5": Phase="Running", Reason="", readiness=true. Elapsed: 6.074317719s
Jun  1 17:09:43.844: INFO: Pod "pod-subpath-test-configmap-vgx5": Phase="Running", Reason="", readiness=true. Elapsed: 8.092499222s
Jun  1 17:09:45.859: INFO: Pod "pod-subpath-test-configmap-vgx5": Phase="Running", Reason="", readiness=true. Elapsed: 10.108016186s
... skipping 2 lines ...
Jun  1 17:09:51.920: INFO: Pod "pod-subpath-test-configmap-vgx5": Phase="Running", Reason="", readiness=true. Elapsed: 16.168506265s
Jun  1 17:09:53.928: INFO: Pod "pod-subpath-test-configmap-vgx5": Phase="Running", Reason="", readiness=true. Elapsed: 18.17655819s
Jun  1 17:09:55.940: INFO: Pod "pod-subpath-test-configmap-vgx5": Phase="Running", Reason="", readiness=true. Elapsed: 20.188399819s
Jun  1 17:09:57.960: INFO: Pod "pod-subpath-test-configmap-vgx5": Phase="Running", Reason="", readiness=true. Elapsed: 22.208750391s
Jun  1 17:09:59.978: INFO: Pod "pod-subpath-test-configmap-vgx5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.226392416s
STEP: Saw pod success
Jun  1 17:09:59.979: INFO: Pod "pod-subpath-test-configmap-vgx5" satisfied condition "Succeeded or Failed"
Jun  1 17:10:00.004: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-configmap-vgx5 container test-container-subpath-configmap-vgx5: <nil>
STEP: delete the pod
Jun  1 17:10:00.094: INFO: Waiting for pod pod-subpath-test-configmap-vgx5 to disappear
Jun  1 17:10:00.105: INFO: Pod pod-subpath-test-configmap-vgx5 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-vgx5
Jun  1 17:10:00.105: INFO: Deleting pod "pod-subpath-test-configmap-vgx5" in namespace "subpath-4125"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Jun  1 17:10:00.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4125" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":292,"completed":16,"skipped":240,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Jun  1 17:10:00.297: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 17:10:07.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3689" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":292,"completed":17,"skipped":261,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 5 lines ...
[BeforeEach] [sig-network] Services
  test/e2e/network/service.go:809
[It] should serve a basic endpoint from pods  [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating service endpoint-test2 in namespace services-4527
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4527 to expose endpoints map[]
Jun  1 17:10:07.422: INFO: Get endpoints failed (14.172477ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jun  1 17:10:08.440: INFO: successfully validated that service endpoint-test2 in namespace services-4527 exposes endpoints map[] (1.032360964s elapsed)
STEP: Creating pod pod1 in namespace services-4527
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4527 to expose endpoints map[pod1:[80]]
Jun  1 17:10:12.690: INFO: successfully validated that service endpoint-test2 in namespace services-4527 exposes endpoints map[pod1:[80]] (4.219880384s elapsed)
STEP: Creating pod pod2 in namespace services-4527
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4527 to expose endpoints map[pod1:[80] pod2:[80]]
... skipping 7 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 17:10:18.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4527" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":292,"completed":18,"skipped":273,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Jun  1 17:10:29.685: INFO: stderr: ""
Jun  1 17:10:29.685: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8751-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 17:10:33.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4958" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":292,"completed":19,"skipped":292,"failed":0}
SSSSSSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 66 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 17:11:01.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9099" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":292,"completed":20,"skipped":299,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 11 lines ...
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 17:11:01.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9144" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":292,"completed":21,"skipped":320,"failed":0}

------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun  1 17:11:28.204: INFO: File wheezy_udp@dns-test-service-3.dns-1824.svc.cluster.local from pod  dns-1824/dns-test-a731142b-b7f1-4fbd-86b6-0751a4d34d64 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 17:11:28.223: INFO: File jessie_udp@dns-test-service-3.dns-1824.svc.cluster.local from pod  dns-1824/dns-test-a731142b-b7f1-4fbd-86b6-0751a4d34d64 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 17:11:28.224: INFO: Lookups using dns-1824/dns-test-a731142b-b7f1-4fbd-86b6-0751a4d34d64 failed for: [wheezy_udp@dns-test-service-3.dns-1824.svc.cluster.local jessie_udp@dns-test-service-3.dns-1824.svc.cluster.local]

Jun  1 17:11:33.249: INFO: File wheezy_udp@dns-test-service-3.dns-1824.svc.cluster.local from pod  dns-1824/dns-test-a731142b-b7f1-4fbd-86b6-0751a4d34d64 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 17:11:33.268: INFO: File jessie_udp@dns-test-service-3.dns-1824.svc.cluster.local from pod  dns-1824/dns-test-a731142b-b7f1-4fbd-86b6-0751a4d34d64 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 17:11:33.268: INFO: Lookups using dns-1824/dns-test-a731142b-b7f1-4fbd-86b6-0751a4d34d64 failed for: [wheezy_udp@dns-test-service-3.dns-1824.svc.cluster.local jessie_udp@dns-test-service-3.dns-1824.svc.cluster.local]

Jun  1 17:11:38.276: INFO: DNS probes using dns-test-a731142b-b7f1-4fbd-86b6-0751a4d34d64 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1824.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1824.svc.cluster.local; sleep 1; done
... skipping 9 lines ...
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 17:11:44.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1824" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":292,"completed":22,"skipped":320,"failed":0}
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 29 lines ...
Jun  1 17:12:12.669: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 17:12:14.124: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Jun  1 17:12:14.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8663" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":23,"skipped":327,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-6499/configmap-test-576f3e4e-77f6-49d3-9dc9-7d157e323914
STEP: Creating a pod to test consume configMaps
Jun  1 17:12:14.293: INFO: Waiting up to 5m0s for pod "pod-configmaps-4458cfb2-cc5a-489c-8478-49011f6d6d3f" in namespace "configmap-6499" to be "Succeeded or Failed"
Jun  1 17:12:14.304: INFO: Pod "pod-configmaps-4458cfb2-cc5a-489c-8478-49011f6d6d3f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.489283ms
Jun  1 17:12:16.316: INFO: Pod "pod-configmaps-4458cfb2-cc5a-489c-8478-49011f6d6d3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022776047s
Jun  1 17:12:18.329: INFO: Pod "pod-configmaps-4458cfb2-cc5a-489c-8478-49011f6d6d3f": Phase="Running", Reason="", readiness=true. Elapsed: 4.035951426s
Jun  1 17:12:20.353: INFO: Pod "pod-configmaps-4458cfb2-cc5a-489c-8478-49011f6d6d3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059406713s
STEP: Saw pod success
Jun  1 17:12:20.353: INFO: Pod "pod-configmaps-4458cfb2-cc5a-489c-8478-49011f6d6d3f" satisfied condition "Succeeded or Failed"
Jun  1 17:12:20.369: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-4458cfb2-cc5a-489c-8478-49011f6d6d3f container env-test: <nil>
STEP: delete the pod
Jun  1 17:12:20.460: INFO: Waiting for pod pod-configmaps-4458cfb2-cc5a-489c-8478-49011f6d6d3f to disappear
Jun  1 17:12:20.469: INFO: Pod pod-configmaps-4458cfb2-cc5a-489c-8478-49011f6d6d3f no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 17:12:20.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6499" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":292,"completed":24,"skipped":357,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 17:12:28.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-4815" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":292,"completed":25,"skipped":362,"failed":0}
SSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 27 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 17:12:44.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6854" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":292,"completed":26,"skipped":369,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
... skipping 7 lines ...
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  test/e2e/framework/framework.go:175
Jun  1 17:12:44.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-6971" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":292,"completed":27,"skipped":399,"failed":0}

------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 17:12:56.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2019" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":292,"completed":28,"skipped":399,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Jun  1 17:13:00.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7875" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":292,"completed":29,"skipped":408,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Jun  1 17:13:00.728: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 17:13:01.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7567" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":292,"completed":30,"skipped":448,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...
Jun  1 17:13:07.705: INFO: Unable to read wheezy_udp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:07.733: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:07.746: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:07.760: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:07.878: INFO: Unable to read jessie_udp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:07.900: INFO: Unable to read jessie_tcp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:08.064: INFO: Lookups using dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8 failed for: [wheezy_udp@dns-test-service.dns-5118.svc.cluster.local wheezy_tcp@dns-test-service.dns-5118.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5118.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5118.svc.cluster.local jessie_udp@dns-test-service.dns-5118.svc.cluster.local jessie_tcp@dns-test-service.dns-5118.svc.cluster.local]

Jun  1 17:13:13.092: INFO: Unable to read wheezy_udp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:13.128: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:13.302: INFO: Unable to read jessie_udp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:13.318: INFO: Unable to read jessie_tcp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:13.445: INFO: Lookups using dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8 failed for: [wheezy_udp@dns-test-service.dns-5118.svc.cluster.local wheezy_tcp@dns-test-service.dns-5118.svc.cluster.local jessie_udp@dns-test-service.dns-5118.svc.cluster.local jessie_tcp@dns-test-service.dns-5118.svc.cluster.local]

Jun  1 17:13:18.084: INFO: Unable to read wheezy_udp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:18.098: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:18.205: INFO: Unable to read jessie_udp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:18.218: INFO: Unable to read jessie_tcp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:18.336: INFO: Lookups using dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8 failed for: [wheezy_udp@dns-test-service.dns-5118.svc.cluster.local wheezy_tcp@dns-test-service.dns-5118.svc.cluster.local jessie_udp@dns-test-service.dns-5118.svc.cluster.local jessie_tcp@dns-test-service.dns-5118.svc.cluster.local]

Jun  1 17:13:23.077: INFO: Unable to read wheezy_udp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:23.097: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:23.256: INFO: Unable to read jessie_udp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:23.276: INFO: Unable to read jessie_tcp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:23.416: INFO: Lookups using dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8 failed for: [wheezy_udp@dns-test-service.dns-5118.svc.cluster.local wheezy_tcp@dns-test-service.dns-5118.svc.cluster.local jessie_udp@dns-test-service.dns-5118.svc.cluster.local jessie_tcp@dns-test-service.dns-5118.svc.cluster.local]

Jun  1 17:13:28.115: INFO: Unable to read wheezy_udp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:28.128: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:28.268: INFO: Unable to read jessie_udp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:28.277: INFO: Unable to read jessie_tcp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:28.352: INFO: Lookups using dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8 failed for: [wheezy_udp@dns-test-service.dns-5118.svc.cluster.local wheezy_tcp@dns-test-service.dns-5118.svc.cluster.local jessie_udp@dns-test-service.dns-5118.svc.cluster.local jessie_tcp@dns-test-service.dns-5118.svc.cluster.local]

Jun  1 17:13:33.073: INFO: Unable to read wheezy_udp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:33.089: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:33.177: INFO: Unable to read jessie_udp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:33.189: INFO: Unable to read jessie_tcp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:33.286: INFO: Lookups using dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8 failed for: [wheezy_udp@dns-test-service.dns-5118.svc.cluster.local wheezy_tcp@dns-test-service.dns-5118.svc.cluster.local jessie_udp@dns-test-service.dns-5118.svc.cluster.local jessie_tcp@dns-test-service.dns-5118.svc.cluster.local]

Jun  1 17:13:38.085: INFO: Unable to read wheezy_udp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:38.104: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5118.svc.cluster.local from pod dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8: the server could not find the requested resource (get pods dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8)
Jun  1 17:13:38.404: INFO: Lookups using dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8 failed for: [wheezy_udp@dns-test-service.dns-5118.svc.cluster.local wheezy_tcp@dns-test-service.dns-5118.svc.cluster.local]

Jun  1 17:13:43.444: INFO: DNS probes using dns-5118/dns-test-d55cbb4d-0841-4169-8a02-8be852edd9f8 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 17:13:43.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5118" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":292,"completed":31,"skipped":476,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Jun  1 17:13:43.800: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config proxy --unix-socket=/tmp/kubectl-proxy-unix427061338/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 17:13:44.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4758" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":292,"completed":32,"skipped":511,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 17:13:44.361: INFO: Waiting up to 5m0s for pod "downwardapi-volume-34665725-65e5-43de-be12-680b50d6c20b" in namespace "downward-api-1745" to be "Succeeded or Failed"
Jun  1 17:13:44.386: INFO: Pod "downwardapi-volume-34665725-65e5-43de-be12-680b50d6c20b": Phase="Pending", Reason="", readiness=false. Elapsed: 24.515452ms
Jun  1 17:13:46.409: INFO: Pod "downwardapi-volume-34665725-65e5-43de-be12-680b50d6c20b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04810971s
Jun  1 17:13:48.425: INFO: Pod "downwardapi-volume-34665725-65e5-43de-be12-680b50d6c20b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063979576s
STEP: Saw pod success
Jun  1 17:13:48.425: INFO: Pod "downwardapi-volume-34665725-65e5-43de-be12-680b50d6c20b" satisfied condition "Succeeded or Failed"
Jun  1 17:13:48.448: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-34665725-65e5-43de-be12-680b50d6c20b container client-container: <nil>
STEP: delete the pod
Jun  1 17:13:48.541: INFO: Waiting for pod downwardapi-volume-34665725-65e5-43de-be12-680b50d6c20b to disappear
Jun  1 17:13:48.556: INFO: Pod downwardapi-volume-34665725-65e5-43de-be12-680b50d6c20b no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 17:13:48.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1745" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":292,"completed":33,"skipped":529,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 17:13:48.743: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a940f4b0-5e25-4d16-b60b-e838ff6cafcd" in namespace "projected-6517" to be "Succeeded or Failed"
Jun  1 17:13:48.763: INFO: Pod "downwardapi-volume-a940f4b0-5e25-4d16-b60b-e838ff6cafcd": Phase="Pending", Reason="", readiness=false. Elapsed: 19.635872ms
Jun  1 17:13:50.777: INFO: Pod "downwardapi-volume-a940f4b0-5e25-4d16-b60b-e838ff6cafcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033423748s
Jun  1 17:13:52.805: INFO: Pod "downwardapi-volume-a940f4b0-5e25-4d16-b60b-e838ff6cafcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061661787s
STEP: Saw pod success
Jun  1 17:13:52.806: INFO: Pod "downwardapi-volume-a940f4b0-5e25-4d16-b60b-e838ff6cafcd" satisfied condition "Succeeded or Failed"
Jun  1 17:13:52.821: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-a940f4b0-5e25-4d16-b60b-e838ff6cafcd container client-container: <nil>
STEP: delete the pod
Jun  1 17:13:52.954: INFO: Waiting for pod downwardapi-volume-a940f4b0-5e25-4d16-b60b-e838ff6cafcd to disappear
Jun  1 17:13:52.962: INFO: Pod downwardapi-volume-a940f4b0-5e25-4d16-b60b-e838ff6cafcd no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 17:13:52.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6517" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":292,"completed":34,"skipped":566,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 9 lines ...
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/framework.go:175
Jun  1 17:13:53.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7061" for this suite.
STEP: Destroying namespace "nspatchtest-9efe5b38-7904-4859-95e8-67a61de3d885-6617" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":292,"completed":35,"skipped":581,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-74ed3f44-99bb-4204-bb08-3b30c4ab93d5
STEP: Creating a pod to test consume secrets
Jun  1 17:13:53.585: INFO: Waiting up to 5m0s for pod "pod-secrets-c604314d-dc88-49de-928a-043858c4bfe4" in namespace "secrets-302" to be "Succeeded or Failed"
Jun  1 17:13:53.600: INFO: Pod "pod-secrets-c604314d-dc88-49de-928a-043858c4bfe4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.769845ms
Jun  1 17:13:55.618: INFO: Pod "pod-secrets-c604314d-dc88-49de-928a-043858c4bfe4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032850609s
Jun  1 17:13:57.631: INFO: Pod "pod-secrets-c604314d-dc88-49de-928a-043858c4bfe4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046510922s
STEP: Saw pod success
Jun  1 17:13:57.631: INFO: Pod "pod-secrets-c604314d-dc88-49de-928a-043858c4bfe4" satisfied condition "Succeeded or Failed"
Jun  1 17:13:57.645: INFO: Trying to get logs from node kind-worker pod pod-secrets-c604314d-dc88-49de-928a-043858c4bfe4 container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 17:13:57.715: INFO: Waiting for pod pod-secrets-c604314d-dc88-49de-928a-043858c4bfe4 to disappear
Jun  1 17:13:57.735: INFO: Pod pod-secrets-c604314d-dc88-49de-928a-043858c4bfe4 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 17:13:57.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-302" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":292,"completed":36,"skipped":617,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 17:13:57.914: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-4476bda0-baef-4717-94f9-d3e894e76814" in namespace "security-context-test-6916" to be "Succeeded or Failed"
Jun  1 17:13:57.925: INFO: Pod "busybox-readonly-false-4476bda0-baef-4717-94f9-d3e894e76814": Phase="Pending", Reason="", readiness=false. Elapsed: 11.29036ms
Jun  1 17:13:59.949: INFO: Pod "busybox-readonly-false-4476bda0-baef-4717-94f9-d3e894e76814": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035560367s
Jun  1 17:14:01.969: INFO: Pod "busybox-readonly-false-4476bda0-baef-4717-94f9-d3e894e76814": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055562469s
Jun  1 17:14:01.969: INFO: Pod "busybox-readonly-false-4476bda0-baef-4717-94f9-d3e894e76814" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Jun  1 17:14:01.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6916" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":292,"completed":37,"skipped":704,"failed":0}
SSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 10 lines ...
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Jun  1 17:14:07.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3233" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":292,"completed":38,"skipped":711,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
Jun  1 17:14:14.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2876" for this suite.
STEP: Destroying namespace "webhook-2876-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":292,"completed":39,"skipped":714,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Jun  1 17:14:21.165: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:21.185: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:21.233: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:21.264: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:21.285: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:21.298: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:21.334: INFO: Lookups using dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8714.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8714.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local jessie_udp@dns-test-service-2.dns-8714.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8714.svc.cluster.local]

Jun  1 17:14:26.341: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:26.353: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:26.375: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:26.397: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:26.433: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:26.445: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:26.468: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:26.486: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:26.541: INFO: Lookups using dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8714.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8714.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local jessie_udp@dns-test-service-2.dns-8714.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8714.svc.cluster.local]

Jun  1 17:14:31.362: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:31.386: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:31.398: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:31.418: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:31.469: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:31.483: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:31.494: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:31.509: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:31.529: INFO: Lookups using dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8714.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8714.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local jessie_udp@dns-test-service-2.dns-8714.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8714.svc.cluster.local]

Jun  1 17:14:36.357: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:36.365: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:36.377: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:36.404: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:36.472: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:36.482: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:36.496: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:36.514: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:36.551: INFO: Lookups using dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8714.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8714.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local jessie_udp@dns-test-service-2.dns-8714.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8714.svc.cluster.local]

Jun  1 17:14:41.357: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:41.369: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:41.384: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:41.404: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:41.465: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:41.485: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:41.512: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:41.545: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:41.588: INFO: Lookups using dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8714.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8714.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local jessie_udp@dns-test-service-2.dns-8714.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8714.svc.cluster.local]

Jun  1 17:14:46.347: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:46.370: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:46.390: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:46.402: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:46.462: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:46.492: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:46.508: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:46.532: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8714.svc.cluster.local from pod dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638: the server could not find the requested resource (get pods dns-test-5151b111-513b-444b-9156-7973b4414638)
Jun  1 17:14:46.580: INFO: Lookups using dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8714.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8714.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8714.svc.cluster.local jessie_udp@dns-test-service-2.dns-8714.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8714.svc.cluster.local]

Jun  1 17:14:51.601: INFO: DNS probes using dns-8714/dns-test-5151b111-513b-444b-9156-7973b4414638 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 17:14:51.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8714" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":292,"completed":40,"skipped":776,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Jun  1 17:14:51.820: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's command
Jun  1 17:14:52.030: INFO: Waiting up to 5m0s for pod "var-expansion-af53ccfe-e5fe-4760-86a8-2731c0c19f2d" in namespace "var-expansion-1921" to be "Succeeded or Failed"
Jun  1 17:14:52.109: INFO: Pod "var-expansion-af53ccfe-e5fe-4760-86a8-2731c0c19f2d": Phase="Pending", Reason="", readiness=false. Elapsed: 78.898063ms
Jun  1 17:14:54.127: INFO: Pod "var-expansion-af53ccfe-e5fe-4760-86a8-2731c0c19f2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097082186s
Jun  1 17:14:56.145: INFO: Pod "var-expansion-af53ccfe-e5fe-4760-86a8-2731c0c19f2d": Phase="Running", Reason="", readiness=true. Elapsed: 4.115057613s
Jun  1 17:14:58.157: INFO: Pod "var-expansion-af53ccfe-e5fe-4760-86a8-2731c0c19f2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.126757172s
STEP: Saw pod success
Jun  1 17:14:58.157: INFO: Pod "var-expansion-af53ccfe-e5fe-4760-86a8-2731c0c19f2d" satisfied condition "Succeeded or Failed"
Jun  1 17:14:58.177: INFO: Trying to get logs from node kind-worker pod var-expansion-af53ccfe-e5fe-4760-86a8-2731c0c19f2d container dapi-container: <nil>
STEP: delete the pod
Jun  1 17:14:58.282: INFO: Waiting for pod var-expansion-af53ccfe-e5fe-4760-86a8-2731c0c19f2d to disappear
Jun  1 17:14:58.302: INFO: Pod var-expansion-af53ccfe-e5fe-4760-86a8-2731c0c19f2d no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 17:14:58.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1921" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":292,"completed":41,"skipped":797,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
... skipping 419 lines ...
Jun  1 17:15:16.058: INFO: 99 %ile: 1.624762375s
Jun  1 17:15:16.058: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  test/e2e/framework/framework.go:175
Jun  1 17:15:16.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-431" for this suite.
•{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":292,"completed":42,"skipped":812,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Jun  1 17:15:16.132: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override arguments
Jun  1 17:15:16.274: INFO: Waiting up to 5m0s for pod "client-containers-803edfb0-77fe-4838-9ecd-465685e5f9cb" in namespace "containers-3484" to be "Succeeded or Failed"
Jun  1 17:15:16.308: INFO: Pod "client-containers-803edfb0-77fe-4838-9ecd-465685e5f9cb": Phase="Pending", Reason="", readiness=false. Elapsed: 32.380184ms
Jun  1 17:15:18.325: INFO: Pod "client-containers-803edfb0-77fe-4838-9ecd-465685e5f9cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049006513s
Jun  1 17:15:20.341: INFO: Pod "client-containers-803edfb0-77fe-4838-9ecd-465685e5f9cb": Phase="Running", Reason="", readiness=true. Elapsed: 4.064741364s
Jun  1 17:15:22.360: INFO: Pod "client-containers-803edfb0-77fe-4838-9ecd-465685e5f9cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.083783435s
STEP: Saw pod success
Jun  1 17:15:22.361: INFO: Pod "client-containers-803edfb0-77fe-4838-9ecd-465685e5f9cb" satisfied condition "Succeeded or Failed"
Jun  1 17:15:22.373: INFO: Trying to get logs from node kind-worker pod client-containers-803edfb0-77fe-4838-9ecd-465685e5f9cb container test-container: <nil>
STEP: delete the pod
Jun  1 17:15:22.454: INFO: Waiting for pod client-containers-803edfb0-77fe-4838-9ecd-465685e5f9cb to disappear
Jun  1 17:15:22.462: INFO: Pod client-containers-803edfb0-77fe-4838-9ecd-465685e5f9cb no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Jun  1 17:15:22.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3484" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":292,"completed":43,"skipped":836,"failed":0}

------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Jun  1 17:15:27.855: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Jun  1 17:15:28.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9737" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":292,"completed":44,"skipped":836,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 17:15:28.260: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8478ecc5-02a8-4b3f-a58b-4086d1d13a48" in namespace "projected-1173" to be "Succeeded or Failed"
Jun  1 17:15:28.293: INFO: Pod "downwardapi-volume-8478ecc5-02a8-4b3f-a58b-4086d1d13a48": Phase="Pending", Reason="", readiness=false. Elapsed: 32.601226ms
Jun  1 17:15:30.396: INFO: Pod "downwardapi-volume-8478ecc5-02a8-4b3f-a58b-4086d1d13a48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13607158s
Jun  1 17:15:32.452: INFO: Pod "downwardapi-volume-8478ecc5-02a8-4b3f-a58b-4086d1d13a48": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191466149s
Jun  1 17:15:34.483: INFO: Pod "downwardapi-volume-8478ecc5-02a8-4b3f-a58b-4086d1d13a48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.222809194s
STEP: Saw pod success
Jun  1 17:15:34.483: INFO: Pod "downwardapi-volume-8478ecc5-02a8-4b3f-a58b-4086d1d13a48" satisfied condition "Succeeded or Failed"
Jun  1 17:15:34.486: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-8478ecc5-02a8-4b3f-a58b-4086d1d13a48 container client-container: <nil>
STEP: delete the pod
Jun  1 17:15:34.658: INFO: Waiting for pod downwardapi-volume-8478ecc5-02a8-4b3f-a58b-4086d1d13a48 to disappear
Jun  1 17:15:34.689: INFO: Pod downwardapi-volume-8478ecc5-02a8-4b3f-a58b-4086d1d13a48 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 17:15:34.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1173" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":292,"completed":45,"skipped":885,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl server-side dry-run 
  should check if kubectl can dry-run update Pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 22 lines ...
Jun  1 17:15:41.415: INFO: stderr: ""
Jun  1 17:15:41.415: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 17:15:41.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-636" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":292,"completed":46,"skipped":898,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Jun  1 17:15:46.937: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Jun  1 17:15:46.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6443" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":292,"completed":47,"skipped":920,"failed":0}
SS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] LimitRange
... skipping 31 lines ...
Jun  1 17:15:54.861: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  test/e2e/framework/framework.go:175
Jun  1 17:15:54.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-4730" for this suite.
•{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":292,"completed":48,"skipped":922,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 17:15:54.956: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun  1 17:15:55.165: INFO: Waiting up to 5m0s for pod "pod-4dc15c00-d234-4431-a4e0-5452dfea930d" in namespace "emptydir-5585" to be "Succeeded or Failed"
Jun  1 17:15:55.185: INFO: Pod "pod-4dc15c00-d234-4431-a4e0-5452dfea930d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.238541ms
Jun  1 17:15:57.205: INFO: Pod "pod-4dc15c00-d234-4431-a4e0-5452dfea930d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039426496s
Jun  1 17:15:59.221: INFO: Pod "pod-4dc15c00-d234-4431-a4e0-5452dfea930d": Phase="Running", Reason="", readiness=true. Elapsed: 4.055731174s
Jun  1 17:16:01.251: INFO: Pod "pod-4dc15c00-d234-4431-a4e0-5452dfea930d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.08519358s
STEP: Saw pod success
Jun  1 17:16:01.251: INFO: Pod "pod-4dc15c00-d234-4431-a4e0-5452dfea930d" satisfied condition "Succeeded or Failed"
Jun  1 17:16:01.269: INFO: Trying to get logs from node kind-worker pod pod-4dc15c00-d234-4431-a4e0-5452dfea930d container test-container: <nil>
STEP: delete the pod
Jun  1 17:16:01.416: INFO: Waiting for pod pod-4dc15c00-d234-4431-a4e0-5452dfea930d to disappear
Jun  1 17:16:01.419: INFO: Pod pod-4dc15c00-d234-4431-a4e0-5452dfea930d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 17:16:01.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5585" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":49,"skipped":932,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 17:16:01.488: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
Jun  1 17:16:01.717: INFO: Waiting up to 5m0s for pod "pod-2278f358-be51-4a59-a5e7-e66c5afac401" in namespace "emptydir-8580" to be "Succeeded or Failed"
Jun  1 17:16:01.759: INFO: Pod "pod-2278f358-be51-4a59-a5e7-e66c5afac401": Phase="Pending", Reason="", readiness=false. Elapsed: 39.60528ms
Jun  1 17:16:03.770: INFO: Pod "pod-2278f358-be51-4a59-a5e7-e66c5afac401": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050087755s
Jun  1 17:16:05.793: INFO: Pod "pod-2278f358-be51-4a59-a5e7-e66c5afac401": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072718288s
STEP: Saw pod success
Jun  1 17:16:05.793: INFO: Pod "pod-2278f358-be51-4a59-a5e7-e66c5afac401" satisfied condition "Succeeded or Failed"
Jun  1 17:16:05.798: INFO: Trying to get logs from node kind-worker pod pod-2278f358-be51-4a59-a5e7-e66c5afac401 container test-container: <nil>
STEP: delete the pod
Jun  1 17:16:05.921: INFO: Waiting for pod pod-2278f358-be51-4a59-a5e7-e66c5afac401 to disappear
Jun  1 17:16:05.934: INFO: Pod pod-2278f358-be51-4a59-a5e7-e66c5afac401 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 17:16:05.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8580" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":50,"skipped":945,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 8 lines ...
Jun  1 17:16:06.258: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"14367353-c54b-4161-b4b3-6c9cdb52e285", Controller:(*bool)(0xc001ce7996), BlockOwnerDeletion:(*bool)(0xc001ce7997)}}
Jun  1 17:16:06.304: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"f2e8b82a-2109-449a-abc3-d2054775e6a9", Controller:(*bool)(0xc001ce7b76), BlockOwnerDeletion:(*bool)(0xc001ce7b77)}}
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Jun  1 17:16:11.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1327" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":292,"completed":51,"skipped":958,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Jun  1 17:16:11.405: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jun  1 17:16:11.540: INFO: Waiting up to 5m0s for pod "downward-api-9bfdeceb-c17d-49f4-964d-1097896bea32" in namespace "downward-api-867" to be "Succeeded or Failed"
Jun  1 17:16:11.568: INFO: Pod "downward-api-9bfdeceb-c17d-49f4-964d-1097896bea32": Phase="Pending", Reason="", readiness=false. Elapsed: 27.935688ms
Jun  1 17:16:13.589: INFO: Pod "downward-api-9bfdeceb-c17d-49f4-964d-1097896bea32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048100713s
Jun  1 17:16:15.608: INFO: Pod "downward-api-9bfdeceb-c17d-49f4-964d-1097896bea32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067139909s
STEP: Saw pod success
Jun  1 17:16:15.608: INFO: Pod "downward-api-9bfdeceb-c17d-49f4-964d-1097896bea32" satisfied condition "Succeeded or Failed"
Jun  1 17:16:15.620: INFO: Trying to get logs from node kind-worker pod downward-api-9bfdeceb-c17d-49f4-964d-1097896bea32 container dapi-container: <nil>
STEP: delete the pod
Jun  1 17:16:15.717: INFO: Waiting for pod downward-api-9bfdeceb-c17d-49f4-964d-1097896bea32 to disappear
Jun  1 17:16:15.734: INFO: Pod downward-api-9bfdeceb-c17d-49f4-964d-1097896bea32 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Jun  1 17:16:15.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-867" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":292,"completed":52,"skipped":981,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 17:16:15.778: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 17:18:15.930: INFO: Deleting pod "var-expansion-9e79a6a3-2d3b-44dd-a486-9ce137896c99" in namespace "var-expansion-1404"
Jun  1 17:18:15.961: INFO: Wait up to 5m0s for pod "var-expansion-9e79a6a3-2d3b-44dd-a486-9ce137896c99" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 17:18:21.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1404" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":292,"completed":53,"skipped":1017,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 29 lines ...
Jun  1 17:18:50.875: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 17:18:51.283: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Jun  1 17:18:51.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9532" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":54,"skipped":1032,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 17:18:51.513: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c0e9ec41-3dca-4b3e-8da0-39858389a507" in namespace "downward-api-156" to be "Succeeded or Failed"
Jun  1 17:18:51.530: INFO: Pod "downwardapi-volume-c0e9ec41-3dca-4b3e-8da0-39858389a507": Phase="Pending", Reason="", readiness=false. Elapsed: 15.40761ms
Jun  1 17:18:53.553: INFO: Pod "downwardapi-volume-c0e9ec41-3dca-4b3e-8da0-39858389a507": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038442743s
Jun  1 17:18:55.578: INFO: Pod "downwardapi-volume-c0e9ec41-3dca-4b3e-8da0-39858389a507": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063202108s
STEP: Saw pod success
Jun  1 17:18:55.580: INFO: Pod "downwardapi-volume-c0e9ec41-3dca-4b3e-8da0-39858389a507" satisfied condition "Succeeded or Failed"
Jun  1 17:18:55.596: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-c0e9ec41-3dca-4b3e-8da0-39858389a507 container client-container: <nil>
STEP: delete the pod
Jun  1 17:18:55.692: INFO: Waiting for pod downwardapi-volume-c0e9ec41-3dca-4b3e-8da0-39858389a507 to disappear
Jun  1 17:18:55.724: INFO: Pod downwardapi-volume-c0e9ec41-3dca-4b3e-8da0-39858389a507 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 17:18:55.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-156" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":55,"skipped":1121,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-node] PodTemplates 
  should run the lifecycle of PodTemplates [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] PodTemplates
... skipping 5 lines ...
[It] should run the lifecycle of PodTemplates [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [sig-node] PodTemplates
  test/e2e/framework/framework.go:175
Jun  1 17:18:56.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-4009" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":292,"completed":56,"skipped":1134,"failed":0}
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 22 lines ...
Jun  1 17:19:26.527: INFO: Waiting for statefulset status.replicas updated to 0
Jun  1 17:19:26.549: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Jun  1 17:19:26.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7497" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":292,"completed":57,"skipped":1136,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 7 lines ...
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Jun  1 17:19:32.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9041" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":292,"completed":58,"skipped":1148,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 17:19:32.154: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on tmpfs
Jun  1 17:19:32.308: INFO: Waiting up to 5m0s for pod "pod-8be9690d-82c3-4ffb-b652-89feebda42b0" in namespace "emptydir-1087" to be "Succeeded or Failed"
Jun  1 17:19:32.321: INFO: Pod "pod-8be9690d-82c3-4ffb-b652-89feebda42b0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.743156ms
Jun  1 17:19:34.342: INFO: Pod "pod-8be9690d-82c3-4ffb-b652-89feebda42b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033623963s
Jun  1 17:19:36.353: INFO: Pod "pod-8be9690d-82c3-4ffb-b652-89feebda42b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044849488s
STEP: Saw pod success
Jun  1 17:19:36.353: INFO: Pod "pod-8be9690d-82c3-4ffb-b652-89feebda42b0" satisfied condition "Succeeded or Failed"
Jun  1 17:19:36.362: INFO: Trying to get logs from node kind-worker pod pod-8be9690d-82c3-4ffb-b652-89feebda42b0 container test-container: <nil>
STEP: delete the pod
Jun  1 17:19:36.449: INFO: Waiting for pod pod-8be9690d-82c3-4ffb-b652-89feebda42b0 to disappear
Jun  1 17:19:36.461: INFO: Pod pod-8be9690d-82c3-4ffb-b652-89feebda42b0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 17:19:36.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1087" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":59,"skipped":1181,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 47 lines ...
Jun  1 17:20:01.172: INFO: Pod "test-rollover-deployment-7c4fd9c879-t758r" is available:
&Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-t758r test-rollover-deployment-7c4fd9c879- deployment-1054 /api/v1/namespaces/deployment-1054/pods/test-rollover-deployment-7c4fd9c879-t758r e830dd8a-6772-4812-994e-53ee49c3cf85 8187 0 2020-06-01 17:19:45 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 89b39758-64fc-4eae-a1ed-9bc54280e31a 0xc003ff4747 0xc003ff4748}] []  [{kube-controller-manager Update v1 2020-06-01 17:19:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"89b39758-64fc-4eae-a1ed-9bc54280e31a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-01 17:19:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.26\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v8pnb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v8pnb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v8pnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,Securi
tyContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 17:19:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 17:19:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 17:19:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 17:19:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.2.26,StartTime:2020-06-01 17:19:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-01 17:19:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://4f0776807da65748c6dfcb37719cf58c38aa9a94d8ac464936a9568c0fd996a2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.26,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Jun  1 17:20:01.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1054" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":292,"completed":60,"skipped":1189,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 28 lines ...
Jun  1 17:20:21.095: INFO: stderr: ""
Jun  1 17:20:21.095: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 17:20:21.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4997" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":292,"completed":61,"skipped":1204,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 17:20:21.343: INFO: Waiting up to 5m0s for pod "downwardapi-volume-61f3cde8-799f-4539-8aad-751d41057976" in namespace "projected-6601" to be "Succeeded or Failed"
Jun  1 17:20:21.352: INFO: Pod "downwardapi-volume-61f3cde8-799f-4539-8aad-751d41057976": Phase="Pending", Reason="", readiness=false. Elapsed: 8.8129ms
Jun  1 17:20:23.370: INFO: Pod "downwardapi-volume-61f3cde8-799f-4539-8aad-751d41057976": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026358026s
Jun  1 17:20:25.386: INFO: Pod "downwardapi-volume-61f3cde8-799f-4539-8aad-751d41057976": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042531403s
STEP: Saw pod success
Jun  1 17:20:25.386: INFO: Pod "downwardapi-volume-61f3cde8-799f-4539-8aad-751d41057976" satisfied condition "Succeeded or Failed"
Jun  1 17:20:25.397: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-61f3cde8-799f-4539-8aad-751d41057976 container client-container: <nil>
STEP: delete the pod
Jun  1 17:20:25.485: INFO: Waiting for pod downwardapi-volume-61f3cde8-799f-4539-8aad-751d41057976 to disappear
Jun  1 17:20:25.501: INFO: Pod downwardapi-volume-61f3cde8-799f-4539-8aad-751d41057976 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 17:20:25.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6601" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":62,"skipped":1219,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
Jun  1 17:21:24.172: INFO: Restart count of pod container-probe-9715/busybox-0c70418b-169b-4ab8-b678-b0c35e0a2f48 is now 1 (54.443636175s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 17:21:24.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9715" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":292,"completed":63,"skipped":1251,"failed":0}
SSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] PreStop
... skipping 26 lines ...
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  test/e2e/framework/framework.go:175
Jun  1 17:21:37.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-8717" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":292,"completed":64,"skipped":1255,"failed":0}

------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Jun  1 17:21:38.357: INFO: stderr: ""
Jun  1 17:21:38.359: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 17:21:38.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5156" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":292,"completed":65,"skipped":1255,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 12 lines ...
Jun  1 17:21:44.577: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 17:21:45.018: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 17:21:45.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5338" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":292,"completed":66,"skipped":1260,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 10 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 17:21:45.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3811" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":292,"completed":67,"skipped":1301,"failed":0}
SS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-8476/configmap-test-07e82d65-c2a3-4dc0-b879-e5ed1ce1d1ec
STEP: Creating a pod to test consume configMaps
Jun  1 17:21:45.334: INFO: Waiting up to 5m0s for pod "pod-configmaps-d404a399-d77f-4fed-9669-fda316b3b1f7" in namespace "configmap-8476" to be "Succeeded or Failed"
Jun  1 17:21:45.355: INFO: Pod "pod-configmaps-d404a399-d77f-4fed-9669-fda316b3b1f7": Phase="Pending", Reason="", readiness=false. Elapsed: 20.646303ms
Jun  1 17:21:47.388: INFO: Pod "pod-configmaps-d404a399-d77f-4fed-9669-fda316b3b1f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054240992s
Jun  1 17:21:49.397: INFO: Pod "pod-configmaps-d404a399-d77f-4fed-9669-fda316b3b1f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063441419s
STEP: Saw pod success
Jun  1 17:21:49.397: INFO: Pod "pod-configmaps-d404a399-d77f-4fed-9669-fda316b3b1f7" satisfied condition "Succeeded or Failed"
Jun  1 17:21:49.416: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-d404a399-d77f-4fed-9669-fda316b3b1f7 container env-test: <nil>
STEP: delete the pod
Jun  1 17:21:49.505: INFO: Waiting for pod pod-configmaps-d404a399-d77f-4fed-9669-fda316b3b1f7 to disappear
Jun  1 17:21:49.512: INFO: Pod pod-configmaps-d404a399-d77f-4fed-9669-fda316b3b1f7 no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 17:21:49.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8476" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":292,"completed":68,"skipped":1303,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 17 lines ...
Jun  1 17:24:22.917: INFO: Restart count of pod container-probe-2930/liveness-77ae5b26-2063-4971-bf24-a79d8d8d419e is now 5 (2m29.228651453s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 17:24:22.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2930" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":292,"completed":69,"skipped":1319,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 17:24:23.004: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Jun  1 17:24:37.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2861" for this suite.
•{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":292,"completed":70,"skipped":1344,"failed":0}
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 5 lines ...
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:90
Jun  1 17:24:37.309: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun  1 17:24:37.365: INFO: Waiting for terminating namespaces to be deleted...
Jun  1 17:24:37.390: INFO: 
Logging pods the apiserver thinks are on node kind-worker before test
Jun  1 17:24:37.412: INFO: fail-once-local-6bvkz from job-2861 started at 2020-06-01 17:24:23 +0000 UTC (1 container statuses recorded)
Jun  1 17:24:37.413: INFO: 	Container c ready: false, restart count 1
Jun  1 17:24:37.413: INFO: fail-once-local-nvspp from job-2861 started at 2020-06-01 17:24:29 +0000 UTC (1 container statuses recorded)
Jun  1 17:24:37.414: INFO: 	Container c ready: false, restart count 1
Jun  1 17:24:37.414: INFO: kindnet-9m9qw from kube-system started at 2020-06-01 17:08:21 +0000 UTC (1 container statuses recorded)
Jun  1 17:24:37.414: INFO: 	Container kindnet-cni ready: true, restart count 0
Jun  1 17:24:37.414: INFO: kube-proxy-gbd88 from kube-system started at 2020-06-01 17:01:21 +0000 UTC (1 container statuses recorded)
Jun  1 17:24:37.414: INFO: 	Container kube-proxy ready: true, restart count 0
Jun  1 17:24:37.414: INFO: 
Logging pods the apiserver thinks are on node kind-worker2 before test
Jun  1 17:24:37.450: INFO: fail-once-local-wszs9 from job-2861 started at 2020-06-01 17:24:29 +0000 UTC (1 container statuses recorded)
Jun  1 17:24:37.452: INFO: 	Container c ready: false, restart count 1
Jun  1 17:24:37.453: INFO: fail-once-local-z2tx4 from job-2861 started at 2020-06-01 17:24:23 +0000 UTC (1 container statuses recorded)
Jun  1 17:24:37.453: INFO: 	Container c ready: false, restart count 1
Jun  1 17:24:37.453: INFO: coredns-66bff467f8-7ps7n from kube-system started at 2020-06-01 17:01:59 +0000 UTC (1 container statuses recorded)
Jun  1 17:24:37.453: INFO: 	Container coredns ready: true, restart count 0
Jun  1 17:24:37.453: INFO: kindnet-8fj6z from kube-system started at 2020-06-01 17:01:21 +0000 UTC (1 container statuses recorded)
Jun  1 17:24:37.453: INFO: 	Container kindnet-cni ready: true, restart count 0
Jun  1 17:24:37.453: INFO: kube-proxy-4rxn4 from kube-system started at 2020-06-01 17:01:21 +0000 UTC (1 container statuses recorded)
... skipping 10 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Jun  1 17:24:45.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4008" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":292,"completed":71,"skipped":1349,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 17:24:45.726: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename webhook
... skipping 7 lines ...
Jun  1 17:24:47.041: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726629086, loc:(*time.Location)(0x8006d20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726629086, loc:(*time.Location)(0x8006d20)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-75dd644756\""}}, CollisionCount:(*int32)(nil)}
Jun  1 17:24:49.064: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726629087, loc:(*time.Location)(0x8006d20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726629087, loc:(*time.Location)(0x8006d20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726629087, loc:(*time.Location)(0x8006d20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726629086, loc:(*time.Location)(0x8006d20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun  1 17:24:51.053: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726629087, loc:(*time.Location)(0x8006d20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726629087, loc:(*time.Location)(0x8006d20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726629087, loc:(*time.Location)(0x8006d20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726629086, loc:(*time.Location)(0x8006d20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun  1 17:24:54.100: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 17:24:55.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4374" for this suite.
STEP: Destroying namespace "webhook-4374-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":292,"completed":72,"skipped":1354,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-73655981-bcd3-4765-ac20-d473c0dfa201
STEP: Creating a pod to test consume configMaps
Jun  1 17:24:55.845: INFO: Waiting up to 5m0s for pod "pod-configmaps-3506f57b-1667-4def-bc8e-8da5cf954741" in namespace "configmap-4731" to be "Succeeded or Failed"
Jun  1 17:24:55.900: INFO: Pod "pod-configmaps-3506f57b-1667-4def-bc8e-8da5cf954741": Phase="Pending", Reason="", readiness=false. Elapsed: 54.963334ms
Jun  1 17:24:57.920: INFO: Pod "pod-configmaps-3506f57b-1667-4def-bc8e-8da5cf954741": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074875811s
Jun  1 17:24:59.928: INFO: Pod "pod-configmaps-3506f57b-1667-4def-bc8e-8da5cf954741": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082570943s
STEP: Saw pod success
Jun  1 17:24:59.928: INFO: Pod "pod-configmaps-3506f57b-1667-4def-bc8e-8da5cf954741" satisfied condition "Succeeded or Failed"
Jun  1 17:24:59.933: INFO: Trying to get logs from node kind-worker pod pod-configmaps-3506f57b-1667-4def-bc8e-8da5cf954741 container configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 17:25:00.006: INFO: Waiting for pod pod-configmaps-3506f57b-1667-4def-bc8e-8da5cf954741 to disappear
Jun  1 17:25:00.015: INFO: Pod pod-configmaps-3506f57b-1667-4def-bc8e-8da5cf954741 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 17:25:00.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4731" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":292,"completed":73,"skipped":1373,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 17:25:00.201: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-8c7a4ad7-0ad0-4419-8cd5-b6cf166c4c21" in namespace "security-context-test-8258" to be "Succeeded or Failed"
Jun  1 17:25:00.213: INFO: Pod "alpine-nnp-false-8c7a4ad7-0ad0-4419-8cd5-b6cf166c4c21": Phase="Pending", Reason="", readiness=false. Elapsed: 11.930946ms
Jun  1 17:25:02.225: INFO: Pod "alpine-nnp-false-8c7a4ad7-0ad0-4419-8cd5-b6cf166c4c21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024753635s
Jun  1 17:25:04.245: INFO: Pod "alpine-nnp-false-8c7a4ad7-0ad0-4419-8cd5-b6cf166c4c21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044031159s
Jun  1 17:25:06.258: INFO: Pod "alpine-nnp-false-8c7a4ad7-0ad0-4419-8cd5-b6cf166c4c21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.057064396s
Jun  1 17:25:06.258: INFO: Pod "alpine-nnp-false-8c7a4ad7-0ad0-4419-8cd5-b6cf166c4c21" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Jun  1 17:25:06.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8258" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":74,"skipped":1385,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 127 lines ...
Jun  1 17:26:04.544: INFO: ss-2  kind-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 17:25:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 17:25:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 17:25:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 17:25:28 +0000 UTC  }]
Jun  1 17:26:04.544: INFO: 
Jun  1 17:26:04.544: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-4354
Jun  1 17:26:05.551: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 17:26:06.286: INFO: rc: 1
Jun  1 17:26:06.288: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Jun  1 17:26:16.288: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 17:26:16.860: INFO: rc: 1
Jun  1 17:26:16.860: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
... skipping 150 lines ...
Jun  1 17:29:06.095: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 17:29:06.715: INFO: rc: 1
Jun  1 17:29:06.716: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 17:29:16.718: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 17:29:17.264: INFO: rc: 1
Jun  1 17:29:17.264: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 17:29:27.264: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 17:29:27.829: INFO: rc: 1
Jun  1 17:29:27.830: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 17:29:37.832: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 17:29:38.476: INFO: rc: 1
Jun  1 17:29:38.476: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 17:29:48.481: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 17:29:49.356: INFO: rc: 1
Jun  1 17:29:49.356: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 17:29:59.357: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 17:29:59.964: INFO: rc: 1
Jun  1 17:29:59.964: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 17:30:09.965: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 17:30:10.613: INFO: rc: 1
Jun  1 17:30:10.614: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 17:30:20.615: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 17:30:21.192: INFO: rc: 1
Jun  1 17:30:21.192: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 17:30:31.193: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 17:30:31.773: INFO: rc: 1
Jun  1 17:30:31.775: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 17:30:41.778: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 17:30:42.322: INFO: rc: 1
Jun  1 17:30:42.322: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 17:30:52.328: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 17:30:52.958: INFO: rc: 1
Jun  1 17:30:52.958: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 17:31:02.960: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 17:31:03.525: INFO: rc: 1
Jun  1 17:31:03.525: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 17:31:13.526: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-4354 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 17:31:14.136: INFO: rc: 1
Jun  1 17:31:14.136: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Jun  1 17:31:14.136: INFO: Scaling statefulset ss to 0
Jun  1 17:31:14.181: INFO: Waiting for statefulset status.replicas updated to 0
... skipping 13 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:592
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":292,"completed":75,"skipped":1403,"failed":0}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
W0601 17:31:15.180594   12365 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun  1 17:31:15.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6035" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":292,"completed":76,"skipped":1406,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
Jun  1 17:31:24.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-240" for this suite.
STEP: Destroying namespace "webhook-240-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":292,"completed":77,"skipped":1448,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 9 lines ...
STEP: Updating configmap projected-configmap-test-upd-758ac199-00ea-48fa-ba78-03639eda6738
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 17:31:30.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4250" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":78,"skipped":1494,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 103 lines ...
Jun  1 17:31:43.880: INFO: Pod "webserver-deployment-84855cf797-zjp7w" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-zjp7w webserver-deployment-84855cf797- deployment-8680 /api/v1/namespaces/deployment-8680/pods/webserver-deployment-84855cf797-zjp7w 7d5515dc-a08f-4ad6-8bd8-d67bb3803b0d 11202 0 2020-06-01 17:31:31 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 61f17b05-f0b2-4d35-baed-b3b3c1991c37 0xc0024e7d70 0xc0024e7d71}] []  [{kube-controller-manager Update v1 2020-06-01 17:31:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"61f17b05-f0b2-4d35-baed-b3b3c1991c37\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-01 17:31:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.71\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdt5t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdt5t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdt5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 17:31:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 17:31:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 17:31:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 17:31:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.71,StartTime:2020-06-01 17:31:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-01 17:31:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://be2cc149f627a38917a7ee1ce3863c4c922d7f78a59f37f83915494e91a330e8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.71,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Jun  1 17:31:43.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8680" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":292,"completed":79,"skipped":1535,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 29 lines ...
  test/e2e/framework/framework.go:175
Jun  1 17:31:58.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4592" for this suite.
STEP: Destroying namespace "webhook-4592-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":292,"completed":80,"skipped":1541,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 27 lines ...
Jun  1 17:32:21.566: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Jun  1 17:32:21.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3858" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":292,"completed":81,"skipped":1550,"failed":0}
SS
------------------------------
[sig-network] Services 
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 46 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 17:32:51.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9707" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":292,"completed":82,"skipped":1552,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-dd94e695-67da-45b9-9c58-a8942b4cd2d9
STEP: Creating a pod to test consume configMaps
Jun  1 17:32:51.489: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ffdff3d8-0730-414c-97f4-02fe15e44aae" in namespace "projected-8685" to be "Succeeded or Failed"
Jun  1 17:32:51.502: INFO: Pod "pod-projected-configmaps-ffdff3d8-0730-414c-97f4-02fe15e44aae": Phase="Pending", Reason="", readiness=false. Elapsed: 12.375223ms
Jun  1 17:32:53.517: INFO: Pod "pod-projected-configmaps-ffdff3d8-0730-414c-97f4-02fe15e44aae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027242204s
Jun  1 17:32:55.524: INFO: Pod "pod-projected-configmaps-ffdff3d8-0730-414c-97f4-02fe15e44aae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035061875s
STEP: Saw pod success
Jun  1 17:32:55.525: INFO: Pod "pod-projected-configmaps-ffdff3d8-0730-414c-97f4-02fe15e44aae" satisfied condition "Succeeded or Failed"
Jun  1 17:32:55.540: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-ffdff3d8-0730-414c-97f4-02fe15e44aae container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 17:32:55.617: INFO: Waiting for pod pod-projected-configmaps-ffdff3d8-0730-414c-97f4-02fe15e44aae to disappear
Jun  1 17:32:55.625: INFO: Pod pod-projected-configmaps-ffdff3d8-0730-414c-97f4-02fe15e44aae no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 17:32:55.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8685" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":292,"completed":83,"skipped":1574,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-f7738bba-68a7-4f80-afb2-d6ec976d7f46
STEP: Creating a pod to test consume configMaps
Jun  1 17:32:55.779: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9c88c910-8f8d-4ec0-9754-7baa145cfc7a" in namespace "projected-5632" to be "Succeeded or Failed"
Jun  1 17:32:55.792: INFO: Pod "pod-projected-configmaps-9c88c910-8f8d-4ec0-9754-7baa145cfc7a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.818105ms
Jun  1 17:32:57.810: INFO: Pod "pod-projected-configmaps-9c88c910-8f8d-4ec0-9754-7baa145cfc7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03013377s
Jun  1 17:32:59.825: INFO: Pod "pod-projected-configmaps-9c88c910-8f8d-4ec0-9754-7baa145cfc7a": Phase="Running", Reason="", readiness=true. Elapsed: 4.045369038s
Jun  1 17:33:01.841: INFO: Pod "pod-projected-configmaps-9c88c910-8f8d-4ec0-9754-7baa145cfc7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.061944654s
STEP: Saw pod success
Jun  1 17:33:01.841: INFO: Pod "pod-projected-configmaps-9c88c910-8f8d-4ec0-9754-7baa145cfc7a" satisfied condition "Succeeded or Failed"
Jun  1 17:33:01.872: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-9c88c910-8f8d-4ec0-9754-7baa145cfc7a container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 17:33:01.949: INFO: Waiting for pod pod-projected-configmaps-9c88c910-8f8d-4ec0-9754-7baa145cfc7a to disappear
Jun  1 17:33:01.965: INFO: Pod pod-projected-configmaps-9c88c910-8f8d-4ec0-9754-7baa145cfc7a no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 17:33:01.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5632" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":292,"completed":84,"skipped":1578,"failed":0}
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 28 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Jun  1 17:33:03.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4352" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":292,"completed":85,"skipped":1586,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Events 
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Events
... skipping 11 lines ...
STEP: deleting the test event
STEP: listing all events in all namespaces
[AfterEach] [sig-api-machinery] Events
  test/e2e/framework/framework.go:175
Jun  1 17:33:03.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4778" for this suite.
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":292,"completed":86,"skipped":1602,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 12 lines ...
STEP: reading a file in the container
Jun  1 17:33:10.469: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl exec --namespace=svcaccounts-5210 pod-service-account-1e81a8b9-0c95-4f81-838c-da5d44978788 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
Jun  1 17:33:11.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5210" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":292,"completed":87,"skipped":1639,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 17:33:11.558: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun  1 17:33:11.685: INFO: Waiting up to 5m0s for pod "pod-45cd5b7e-0b74-4b58-8834-8a1b9f9c82ac" in namespace "emptydir-5062" to be "Succeeded or Failed"
Jun  1 17:33:11.697: INFO: Pod "pod-45cd5b7e-0b74-4b58-8834-8a1b9f9c82ac": Phase="Pending", Reason="", readiness=false. Elapsed: 11.442439ms
Jun  1 17:33:13.717: INFO: Pod "pod-45cd5b7e-0b74-4b58-8834-8a1b9f9c82ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031417985s
Jun  1 17:33:15.744: INFO: Pod "pod-45cd5b7e-0b74-4b58-8834-8a1b9f9c82ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05833809s
STEP: Saw pod success
Jun  1 17:33:15.744: INFO: Pod "pod-45cd5b7e-0b74-4b58-8834-8a1b9f9c82ac" satisfied condition "Succeeded or Failed"
Jun  1 17:33:15.752: INFO: Trying to get logs from node kind-worker2 pod pod-45cd5b7e-0b74-4b58-8834-8a1b9f9c82ac container test-container: <nil>
STEP: delete the pod
Jun  1 17:33:15.883: INFO: Waiting for pod pod-45cd5b7e-0b74-4b58-8834-8a1b9f9c82ac to disappear
Jun  1 17:33:15.896: INFO: Pod pod-45cd5b7e-0b74-4b58-8834-8a1b9f9c82ac no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 17:33:15.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5062" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":88,"skipped":1664,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 16 lines ...
Jun  1 17:33:23.881: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  test/e2e/framework/framework.go:597
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 17:33:36.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7000" for this suite.
STEP: Destroying namespace "webhook-7000-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":292,"completed":89,"skipped":1706,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-6198a376-898d-41be-a223-94030ba1989b
STEP: Creating a pod to test consume configMaps
Jun  1 17:33:36.850: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-02dbc991-52d8-422e-b003-8a9823f4cbb6" in namespace "projected-1432" to be "Succeeded or Failed"
Jun  1 17:33:36.864: INFO: Pod "pod-projected-configmaps-02dbc991-52d8-422e-b003-8a9823f4cbb6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.905874ms
Jun  1 17:33:38.884: INFO: Pod "pod-projected-configmaps-02dbc991-52d8-422e-b003-8a9823f4cbb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033792796s
Jun  1 17:33:40.901: INFO: Pod "pod-projected-configmaps-02dbc991-52d8-422e-b003-8a9823f4cbb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051188407s
STEP: Saw pod success
Jun  1 17:33:40.901: INFO: Pod "pod-projected-configmaps-02dbc991-52d8-422e-b003-8a9823f4cbb6" satisfied condition "Succeeded or Failed"
Jun  1 17:33:40.925: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-02dbc991-52d8-422e-b003-8a9823f4cbb6 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 17:33:41.028: INFO: Waiting for pod pod-projected-configmaps-02dbc991-52d8-422e-b003-8a9823f4cbb6 to disappear
Jun  1 17:33:41.039: INFO: Pod pod-projected-configmaps-02dbc991-52d8-422e-b003-8a9823f4cbb6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 17:33:41.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1432" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":90,"skipped":1717,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-2bcbc506-382e-468d-aad3-9dbdaf25022c
STEP: Creating a pod to test consume configMaps
Jun  1 17:33:41.308: INFO: Waiting up to 5m0s for pod "pod-configmaps-8bd20487-fc6b-446f-b87e-b2cf49db2b79" in namespace "configmap-5610" to be "Succeeded or Failed"
Jun  1 17:33:41.327: INFO: Pod "pod-configmaps-8bd20487-fc6b-446f-b87e-b2cf49db2b79": Phase="Pending", Reason="", readiness=false. Elapsed: 18.820011ms
Jun  1 17:33:43.342: INFO: Pod "pod-configmaps-8bd20487-fc6b-446f-b87e-b2cf49db2b79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03371299s
Jun  1 17:33:45.361: INFO: Pod "pod-configmaps-8bd20487-fc6b-446f-b87e-b2cf49db2b79": Phase="Running", Reason="", readiness=true. Elapsed: 4.052460701s
Jun  1 17:33:47.379: INFO: Pod "pod-configmaps-8bd20487-fc6b-446f-b87e-b2cf49db2b79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.070255598s
STEP: Saw pod success
Jun  1 17:33:47.379: INFO: Pod "pod-configmaps-8bd20487-fc6b-446f-b87e-b2cf49db2b79" satisfied condition "Succeeded or Failed"
Jun  1 17:33:47.398: INFO: Trying to get logs from node kind-worker pod pod-configmaps-8bd20487-fc6b-446f-b87e-b2cf49db2b79 container configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 17:33:47.484: INFO: Waiting for pod pod-configmaps-8bd20487-fc6b-446f-b87e-b2cf49db2b79 to disappear
Jun  1 17:33:47.511: INFO: Pod pod-configmaps-8bd20487-fc6b-446f-b87e-b2cf49db2b79 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 17:33:47.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5610" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":91,"skipped":1792,"failed":0}

------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 125 lines ...
Jun  1 17:34:19.765: INFO: stderr: ""
Jun  1 17:34:19.765: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 17:34:19.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6512" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":292,"completed":92,"skipped":1792,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-7ee930bf-72d1-4ddc-9329-0c0aabfd820d
STEP: Creating a pod to test consume configMaps
Jun  1 17:34:20.006: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2aa3828c-b869-4064-b7a2-f4d7cd10b517" in namespace "projected-2518" to be "Succeeded or Failed"
Jun  1 17:34:20.065: INFO: Pod "pod-projected-configmaps-2aa3828c-b869-4064-b7a2-f4d7cd10b517": Phase="Pending", Reason="", readiness=false. Elapsed: 59.595956ms
Jun  1 17:34:22.081: INFO: Pod "pod-projected-configmaps-2aa3828c-b869-4064-b7a2-f4d7cd10b517": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074947837s
Jun  1 17:34:24.097: INFO: Pod "pod-projected-configmaps-2aa3828c-b869-4064-b7a2-f4d7cd10b517": Phase="Running", Reason="", readiness=true. Elapsed: 4.091550476s
Jun  1 17:34:26.114: INFO: Pod "pod-projected-configmaps-2aa3828c-b869-4064-b7a2-f4d7cd10b517": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.107751692s
STEP: Saw pod success
Jun  1 17:34:26.114: INFO: Pod "pod-projected-configmaps-2aa3828c-b869-4064-b7a2-f4d7cd10b517" satisfied condition "Succeeded or Failed"
Jun  1 17:34:26.128: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-2aa3828c-b869-4064-b7a2-f4d7cd10b517 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 17:34:26.175: INFO: Waiting for pod pod-projected-configmaps-2aa3828c-b869-4064-b7a2-f4d7cd10b517 to disappear
Jun  1 17:34:26.185: INFO: Pod pod-projected-configmaps-2aa3828c-b869-4064-b7a2-f4d7cd10b517 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 17:34:26.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2518" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":93,"skipped":1796,"failed":0}
S
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Jun  1 17:34:26.210: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test env composition
Jun  1 17:34:26.313: INFO: Waiting up to 5m0s for pod "var-expansion-6cac1961-00cd-44f0-bd8f-2a318308c170" in namespace "var-expansion-9726" to be "Succeeded or Failed"
Jun  1 17:34:26.325: INFO: Pod "var-expansion-6cac1961-00cd-44f0-bd8f-2a318308c170": Phase="Pending", Reason="", readiness=false. Elapsed: 11.609901ms
Jun  1 17:34:28.349: INFO: Pod "var-expansion-6cac1961-00cd-44f0-bd8f-2a318308c170": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035859686s
Jun  1 17:34:30.379: INFO: Pod "var-expansion-6cac1961-00cd-44f0-bd8f-2a318308c170": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065800981s
STEP: Saw pod success
Jun  1 17:34:30.380: INFO: Pod "var-expansion-6cac1961-00cd-44f0-bd8f-2a318308c170" satisfied condition "Succeeded or Failed"
Jun  1 17:34:30.395: INFO: Trying to get logs from node kind-worker pod var-expansion-6cac1961-00cd-44f0-bd8f-2a318308c170 container dapi-container: <nil>
STEP: delete the pod
Jun  1 17:34:30.489: INFO: Waiting for pod var-expansion-6cac1961-00cd-44f0-bd8f-2a318308c170 to disappear
Jun  1 17:34:30.538: INFO: Pod var-expansion-6cac1961-00cd-44f0-bd8f-2a318308c170 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 17:34:30.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9726" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":292,"completed":94,"skipped":1797,"failed":0}
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-secret-lj2p
STEP: Creating a pod to test atomic-volume-subpath
Jun  1 17:34:30.805: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-lj2p" in namespace "subpath-1332" to be "Succeeded or Failed"
Jun  1 17:34:30.827: INFO: Pod "pod-subpath-test-secret-lj2p": Phase="Pending", Reason="", readiness=false. Elapsed: 22.488796ms
Jun  1 17:34:32.844: INFO: Pod "pod-subpath-test-secret-lj2p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039156733s
Jun  1 17:34:34.864: INFO: Pod "pod-subpath-test-secret-lj2p": Phase="Running", Reason="", readiness=true. Elapsed: 4.058953403s
Jun  1 17:34:36.881: INFO: Pod "pod-subpath-test-secret-lj2p": Phase="Running", Reason="", readiness=true. Elapsed: 6.076035204s
Jun  1 17:34:38.900: INFO: Pod "pod-subpath-test-secret-lj2p": Phase="Running", Reason="", readiness=true. Elapsed: 8.094754645s
Jun  1 17:34:40.914: INFO: Pod "pod-subpath-test-secret-lj2p": Phase="Running", Reason="", readiness=true. Elapsed: 10.1088797s
... skipping 2 lines ...
Jun  1 17:34:46.960: INFO: Pod "pod-subpath-test-secret-lj2p": Phase="Running", Reason="", readiness=true. Elapsed: 16.155437313s
Jun  1 17:34:48.980: INFO: Pod "pod-subpath-test-secret-lj2p": Phase="Running", Reason="", readiness=true. Elapsed: 18.175025877s
Jun  1 17:34:50.998: INFO: Pod "pod-subpath-test-secret-lj2p": Phase="Running", Reason="", readiness=true. Elapsed: 20.192876427s
Jun  1 17:34:53.011: INFO: Pod "pod-subpath-test-secret-lj2p": Phase="Running", Reason="", readiness=true. Elapsed: 22.206547201s
Jun  1 17:34:55.051: INFO: Pod "pod-subpath-test-secret-lj2p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.245793016s
STEP: Saw pod success
Jun  1 17:34:55.051: INFO: Pod "pod-subpath-test-secret-lj2p" satisfied condition "Succeeded or Failed"
Jun  1 17:34:55.061: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-secret-lj2p container test-container-subpath-secret-lj2p: <nil>
STEP: delete the pod
Jun  1 17:34:55.132: INFO: Waiting for pod pod-subpath-test-secret-lj2p to disappear
Jun  1 17:34:55.177: INFO: Pod pod-subpath-test-secret-lj2p no longer exists
STEP: Deleting pod pod-subpath-test-secret-lj2p
Jun  1 17:34:55.177: INFO: Deleting pod "pod-subpath-test-secret-lj2p" in namespace "subpath-1332"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Jun  1 17:34:55.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1332" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":292,"completed":95,"skipped":1802,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 17:34:55.231: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun  1 17:34:55.369: INFO: Waiting up to 5m0s for pod "pod-582e8d99-4f52-40e3-9e0b-ad426be0c2a3" in namespace "emptydir-943" to be "Succeeded or Failed"
Jun  1 17:34:55.398: INFO: Pod "pod-582e8d99-4f52-40e3-9e0b-ad426be0c2a3": Phase="Pending", Reason="", readiness=false. Elapsed: 28.720148ms
Jun  1 17:34:57.413: INFO: Pod "pod-582e8d99-4f52-40e3-9e0b-ad426be0c2a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043148212s
Jun  1 17:34:59.427: INFO: Pod "pod-582e8d99-4f52-40e3-9e0b-ad426be0c2a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05786065s
STEP: Saw pod success
Jun  1 17:34:59.427: INFO: Pod "pod-582e8d99-4f52-40e3-9e0b-ad426be0c2a3" satisfied condition "Succeeded or Failed"
Jun  1 17:34:59.435: INFO: Trying to get logs from node kind-worker pod pod-582e8d99-4f52-40e3-9e0b-ad426be0c2a3 container test-container: <nil>
STEP: delete the pod
Jun  1 17:34:59.506: INFO: Waiting for pod pod-582e8d99-4f52-40e3-9e0b-ad426be0c2a3 to disappear
Jun  1 17:34:59.522: INFO: Pod pod-582e8d99-4f52-40e3-9e0b-ad426be0c2a3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 17:34:59.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-943" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":96,"skipped":1808,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 17:34:59.738: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c7cd6b72-e2f3-4242-8980-581fcf8c3ea3" in namespace "projected-6341" to be "Succeeded or Failed"
Jun  1 17:34:59.756: INFO: Pod "downwardapi-volume-c7cd6b72-e2f3-4242-8980-581fcf8c3ea3": Phase="Pending", Reason="", readiness=false. Elapsed: 17.914964ms
Jun  1 17:35:01.769: INFO: Pod "downwardapi-volume-c7cd6b72-e2f3-4242-8980-581fcf8c3ea3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030610159s
Jun  1 17:35:03.788: INFO: Pod "downwardapi-volume-c7cd6b72-e2f3-4242-8980-581fcf8c3ea3": Phase="Running", Reason="", readiness=true. Elapsed: 4.049888871s
Jun  1 17:35:05.807: INFO: Pod "downwardapi-volume-c7cd6b72-e2f3-4242-8980-581fcf8c3ea3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069140895s
STEP: Saw pod success
Jun  1 17:35:05.808: INFO: Pod "downwardapi-volume-c7cd6b72-e2f3-4242-8980-581fcf8c3ea3" satisfied condition "Succeeded or Failed"
Jun  1 17:35:05.818: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-c7cd6b72-e2f3-4242-8980-581fcf8c3ea3 container client-container: <nil>
STEP: delete the pod
Jun  1 17:35:05.890: INFO: Waiting for pod downwardapi-volume-c7cd6b72-e2f3-4242-8980-581fcf8c3ea3 to disappear
Jun  1 17:35:05.917: INFO: Pod downwardapi-volume-c7cd6b72-e2f3-4242-8980-581fcf8c3ea3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 17:35:05.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6341" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":292,"completed":97,"skipped":1812,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Events
... skipping 16 lines ...
Jun  1 17:35:14.164: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  test/e2e/framework/framework.go:175
Jun  1 17:35:14.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2830" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":292,"completed":98,"skipped":1820,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
Jun  1 17:35:26.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5242" for this suite.
STEP: Destroying namespace "webhook-5242-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":292,"completed":99,"skipped":1830,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 17:35:26.361: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun  1 17:35:26.541: INFO: Waiting up to 5m0s for pod "pod-6aefa3d8-b84b-4ffc-9350-4b93d09bc263" in namespace "emptydir-6999" to be "Succeeded or Failed"
Jun  1 17:35:26.589: INFO: Pod "pod-6aefa3d8-b84b-4ffc-9350-4b93d09bc263": Phase="Pending", Reason="", readiness=false. Elapsed: 47.552385ms
Jun  1 17:35:28.608: INFO: Pod "pod-6aefa3d8-b84b-4ffc-9350-4b93d09bc263": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066837076s
Jun  1 17:35:30.629: INFO: Pod "pod-6aefa3d8-b84b-4ffc-9350-4b93d09bc263": Phase="Running", Reason="", readiness=true. Elapsed: 4.087470713s
Jun  1 17:35:32.648: INFO: Pod "pod-6aefa3d8-b84b-4ffc-9350-4b93d09bc263": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.107258058s
STEP: Saw pod success
Jun  1 17:35:32.648: INFO: Pod "pod-6aefa3d8-b84b-4ffc-9350-4b93d09bc263" satisfied condition "Succeeded or Failed"
Jun  1 17:35:32.676: INFO: Trying to get logs from node kind-worker2 pod pod-6aefa3d8-b84b-4ffc-9350-4b93d09bc263 container test-container: <nil>
STEP: delete the pod
Jun  1 17:35:32.784: INFO: Waiting for pod pod-6aefa3d8-b84b-4ffc-9350-4b93d09bc263 to disappear
Jun  1 17:35:32.805: INFO: Pod pod-6aefa3d8-b84b-4ffc-9350-4b93d09bc263 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 17:35:32.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6999" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":100,"skipped":1858,"failed":0}
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 2 lines ...
Jun  1 17:35:32.841: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun  1 17:35:37.137: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Jun  1 17:35:37.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8406" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":292,"completed":101,"skipped":1862,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 25 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 17:36:01.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4581" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":292,"completed":102,"skipped":1885,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Jun  1 17:36:08.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8991" for this suite.
STEP: Destroying namespace "webhook-8991-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":292,"completed":103,"skipped":1891,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 17 lines ...
Jun  1 17:36:08.914: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-668 /api/v1/namespaces/watch-668/configmaps/e2e-watch-test-watch-closed 1454e7b5-b5ab-43d0-a0b6-c9b1a2c8a65a 13465 0 2020-06-01 17:36:08 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-06-01 17:36:08 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jun  1 17:36:08.915: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-668 /api/v1/namespaces/watch-668/configmaps/e2e-watch-test-watch-closed 1454e7b5-b5ab-43d0-a0b6-c9b1a2c8a65a 13466 0 2020-06-01 17:36:08 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-06-01 17:36:08 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Jun  1 17:36:08.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-668" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":292,"completed":104,"skipped":1916,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 17:36:08.947: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jun  1 17:36:09.066: INFO: Waiting up to 5m0s for pod "pod-0f25019e-f673-4f4c-ab4a-51d88e93544f" in namespace "emptydir-5029" to be "Succeeded or Failed"
Jun  1 17:36:09.105: INFO: Pod "pod-0f25019e-f673-4f4c-ab4a-51d88e93544f": Phase="Pending", Reason="", readiness=false. Elapsed: 39.258032ms
Jun  1 17:36:11.150: INFO: Pod "pod-0f25019e-f673-4f4c-ab4a-51d88e93544f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084282975s
Jun  1 17:36:13.177: INFO: Pod "pod-0f25019e-f673-4f4c-ab4a-51d88e93544f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111128874s
STEP: Saw pod success
Jun  1 17:36:13.178: INFO: Pod "pod-0f25019e-f673-4f4c-ab4a-51d88e93544f" satisfied condition "Succeeded or Failed"
Jun  1 17:36:13.184: INFO: Trying to get logs from node kind-worker pod pod-0f25019e-f673-4f4c-ab4a-51d88e93544f container test-container: <nil>
STEP: delete the pod
Jun  1 17:36:13.283: INFO: Waiting for pod pod-0f25019e-f673-4f4c-ab4a-51d88e93544f to disappear
Jun  1 17:36:13.298: INFO: Pod pod-0f25019e-f673-4f4c-ab4a-51d88e93544f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 17:36:13.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5029" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":105,"skipped":1958,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-5acd9440-ba76-476a-b32f-c02b78ffe936
STEP: Creating a pod to test consume configMaps
Jun  1 17:36:13.508: INFO: Waiting up to 5m0s for pod "pod-configmaps-8d8fe319-0a1e-42ac-b674-9eb3a8833421" in namespace "configmap-569" to be "Succeeded or Failed"
Jun  1 17:36:13.536: INFO: Pod "pod-configmaps-8d8fe319-0a1e-42ac-b674-9eb3a8833421": Phase="Pending", Reason="", readiness=false. Elapsed: 28.468147ms
Jun  1 17:36:15.555: INFO: Pod "pod-configmaps-8d8fe319-0a1e-42ac-b674-9eb3a8833421": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047440763s
Jun  1 17:36:17.576: INFO: Pod "pod-configmaps-8d8fe319-0a1e-42ac-b674-9eb3a8833421": Phase="Running", Reason="", readiness=true. Elapsed: 4.068674479s
Jun  1 17:36:19.589: INFO: Pod "pod-configmaps-8d8fe319-0a1e-42ac-b674-9eb3a8833421": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.081773782s
STEP: Saw pod success
Jun  1 17:36:19.589: INFO: Pod "pod-configmaps-8d8fe319-0a1e-42ac-b674-9eb3a8833421" satisfied condition "Succeeded or Failed"
Jun  1 17:36:19.609: INFO: Trying to get logs from node kind-worker pod pod-configmaps-8d8fe319-0a1e-42ac-b674-9eb3a8833421 container configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 17:36:19.660: INFO: Waiting for pod pod-configmaps-8d8fe319-0a1e-42ac-b674-9eb3a8833421 to disappear
Jun  1 17:36:19.665: INFO: Pod pod-configmaps-8d8fe319-0a1e-42ac-b674-9eb3a8833421 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 17:36:19.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-569" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":292,"completed":106,"skipped":1970,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  test/e2e/framework/framework.go:175
Jun  1 17:36:27.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5105" for this suite.
STEP: Destroying namespace "webhook-5105-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":292,"completed":107,"skipped":1988,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Aggregator
... skipping 24 lines ...
[AfterEach] [sig-api-machinery] Aggregator
  test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  test/e2e/framework/framework.go:175
Jun  1 17:36:56.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-3806" for this suite.
•{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":292,"completed":108,"skipped":1994,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating secret secrets-2493/secret-test-db5fa5ec-b4f8-48ec-9df2-205538ff63a5
STEP: Creating a pod to test consume secrets
Jun  1 17:36:56.856: INFO: Waiting up to 5m0s for pod "pod-configmaps-4bd8a849-4df6-4b7d-a461-6a9d5d98b34c" in namespace "secrets-2493" to be "Succeeded or Failed"
Jun  1 17:36:56.874: INFO: Pod "pod-configmaps-4bd8a849-4df6-4b7d-a461-6a9d5d98b34c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.612358ms
Jun  1 17:36:58.899: INFO: Pod "pod-configmaps-4bd8a849-4df6-4b7d-a461-6a9d5d98b34c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043067324s
Jun  1 17:37:00.916: INFO: Pod "pod-configmaps-4bd8a849-4df6-4b7d-a461-6a9d5d98b34c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059405525s
Jun  1 17:37:02.933: INFO: Pod "pod-configmaps-4bd8a849-4df6-4b7d-a461-6a9d5d98b34c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.077226922s
STEP: Saw pod success
Jun  1 17:37:02.933: INFO: Pod "pod-configmaps-4bd8a849-4df6-4b7d-a461-6a9d5d98b34c" satisfied condition "Succeeded or Failed"
Jun  1 17:37:02.957: INFO: Trying to get logs from node kind-worker pod pod-configmaps-4bd8a849-4df6-4b7d-a461-6a9d5d98b34c container env-test: <nil>
STEP: delete the pod
Jun  1 17:37:03.036: INFO: Waiting for pod pod-configmaps-4bd8a849-4df6-4b7d-a461-6a9d5d98b34c to disappear
Jun  1 17:37:03.054: INFO: Pod pod-configmaps-4bd8a849-4df6-4b7d-a461-6a9d5d98b34c no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Jun  1 17:37:03.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2493" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":292,"completed":109,"skipped":2004,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 17:37:03.253: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a598048d-0dc8-48c5-a1ce-a65478b70d2c" in namespace "downward-api-8049" to be "Succeeded or Failed"
Jun  1 17:37:03.270: INFO: Pod "downwardapi-volume-a598048d-0dc8-48c5-a1ce-a65478b70d2c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.36213ms
Jun  1 17:37:05.282: INFO: Pod "downwardapi-volume-a598048d-0dc8-48c5-a1ce-a65478b70d2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028979637s
Jun  1 17:37:07.297: INFO: Pod "downwardapi-volume-a598048d-0dc8-48c5-a1ce-a65478b70d2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044540461s
STEP: Saw pod success
Jun  1 17:37:07.300: INFO: Pod "downwardapi-volume-a598048d-0dc8-48c5-a1ce-a65478b70d2c" satisfied condition "Succeeded or Failed"
Jun  1 17:37:07.313: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-a598048d-0dc8-48c5-a1ce-a65478b70d2c container client-container: <nil>
STEP: delete the pod
Jun  1 17:37:07.400: INFO: Waiting for pod downwardapi-volume-a598048d-0dc8-48c5-a1ce-a65478b70d2c to disappear
Jun  1 17:37:07.417: INFO: Pod downwardapi-volume-a598048d-0dc8-48c5-a1ce-a65478b70d2c no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 17:37:07.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8049" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":292,"completed":110,"skipped":2010,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 190 lines ...
Jun  1 17:37:26.136: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun  1 17:37:26.136: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 17:37:26.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-788" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":292,"completed":111,"skipped":2016,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 17:37:26.458: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7fe62b5f-a520-4641-affe-cb7925b2a856" in namespace "projected-6559" to be "Succeeded or Failed"
Jun  1 17:37:26.498: INFO: Pod "downwardapi-volume-7fe62b5f-a520-4641-affe-cb7925b2a856": Phase="Pending", Reason="", readiness=false. Elapsed: 39.85215ms
Jun  1 17:37:28.517: INFO: Pod "downwardapi-volume-7fe62b5f-a520-4641-affe-cb7925b2a856": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059167197s
Jun  1 17:37:30.534: INFO: Pod "downwardapi-volume-7fe62b5f-a520-4641-affe-cb7925b2a856": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075930726s
Jun  1 17:37:32.547: INFO: Pod "downwardapi-volume-7fe62b5f-a520-4641-affe-cb7925b2a856": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.088991578s
STEP: Saw pod success
Jun  1 17:37:32.547: INFO: Pod "downwardapi-volume-7fe62b5f-a520-4641-affe-cb7925b2a856" satisfied condition "Succeeded or Failed"
Jun  1 17:37:32.564: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-7fe62b5f-a520-4641-affe-cb7925b2a856 container client-container: <nil>
STEP: delete the pod
Jun  1 17:37:32.631: INFO: Waiting for pod downwardapi-volume-7fe62b5f-a520-4641-affe-cb7925b2a856 to disappear
Jun  1 17:37:32.640: INFO: Pod downwardapi-volume-7fe62b5f-a520-4641-affe-cb7925b2a856 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 17:37:32.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6559" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":292,"completed":112,"skipped":2039,"failed":0}
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 33 lines ...

W0601 17:37:39.053874   12365 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Jun  1 17:37:39.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-338" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":292,"completed":113,"skipped":2041,"failed":0}

------------------------------
[sig-network] Services 
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 72 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 17:38:11.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1425" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":292,"completed":114,"skipped":2041,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
... skipping 12 lines ...
Jun  1 17:38:16.593: INFO: Trying to dial the pod
Jun  1 17:38:21.644: INFO: Controller my-hostname-basic-8af58f0c-b5f7-427d-9c1c-7553acfbf8a3: Got expected result from replica 1 [my-hostname-basic-8af58f0c-b5f7-427d-9c1c-7553acfbf8a3-x4w4t]: "my-hostname-basic-8af58f0c-b5f7-427d-9c1c-7553acfbf8a3-x4w4t", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:175
Jun  1 17:38:21.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9703" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":292,"completed":115,"skipped":2054,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Jun  1 17:38:21.793: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 17:38:23.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8105" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":292,"completed":116,"skipped":2102,"failed":0}
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should be possible to delete [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Jun  1 17:38:23.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4896" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":292,"completed":117,"skipped":2104,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Jun  1 17:38:23.740: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 17:38:27.689: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 17:38:43.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7919" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":292,"completed":118,"skipped":2112,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-35c93873-16a8-4870-96a1-b06af74bec09
STEP: Creating a pod to test consume secrets
Jun  1 17:38:43.694: INFO: Waiting up to 5m0s for pod "pod-secrets-5a7c8953-56d2-41bd-90e1-bc6bea995453" in namespace "secrets-7848" to be "Succeeded or Failed"
Jun  1 17:38:43.709: INFO: Pod "pod-secrets-5a7c8953-56d2-41bd-90e1-bc6bea995453": Phase="Pending", Reason="", readiness=false. Elapsed: 15.190517ms
Jun  1 17:38:45.728: INFO: Pod "pod-secrets-5a7c8953-56d2-41bd-90e1-bc6bea995453": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033582783s
Jun  1 17:38:47.736: INFO: Pod "pod-secrets-5a7c8953-56d2-41bd-90e1-bc6bea995453": Phase="Running", Reason="", readiness=true. Elapsed: 4.042238325s
Jun  1 17:38:49.757: INFO: Pod "pod-secrets-5a7c8953-56d2-41bd-90e1-bc6bea995453": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.062718655s
STEP: Saw pod success
Jun  1 17:38:49.758: INFO: Pod "pod-secrets-5a7c8953-56d2-41bd-90e1-bc6bea995453" satisfied condition "Succeeded or Failed"
Jun  1 17:38:49.770: INFO: Trying to get logs from node kind-worker pod pod-secrets-5a7c8953-56d2-41bd-90e1-bc6bea995453 container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 17:38:49.817: INFO: Waiting for pod pod-secrets-5a7c8953-56d2-41bd-90e1-bc6bea995453 to disappear
Jun  1 17:38:49.835: INFO: Pod pod-secrets-5a7c8953-56d2-41bd-90e1-bc6bea995453 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 17:38:49.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7848" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":119,"skipped":2117,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-f1288fe1-7160-459b-bc42-73ad483a75db
STEP: Creating a pod to test consume secrets
Jun  1 17:38:50.029: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f7495ae1-8fd9-4724-864c-ef047b2b7c0c" in namespace "projected-3885" to be "Succeeded or Failed"
Jun  1 17:38:50.047: INFO: Pod "pod-projected-secrets-f7495ae1-8fd9-4724-864c-ef047b2b7c0c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.009693ms
Jun  1 17:38:52.064: INFO: Pod "pod-projected-secrets-f7495ae1-8fd9-4724-864c-ef047b2b7c0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034273004s
Jun  1 17:38:54.091: INFO: Pod "pod-projected-secrets-f7495ae1-8fd9-4724-864c-ef047b2b7c0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061518797s
Jun  1 17:38:56.109: INFO: Pod "pod-projected-secrets-f7495ae1-8fd9-4724-864c-ef047b2b7c0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.07954764s
STEP: Saw pod success
Jun  1 17:38:56.109: INFO: Pod "pod-projected-secrets-f7495ae1-8fd9-4724-864c-ef047b2b7c0c" satisfied condition "Succeeded or Failed"
Jun  1 17:38:56.118: INFO: Trying to get logs from node kind-worker pod pod-projected-secrets-f7495ae1-8fd9-4724-864c-ef047b2b7c0c container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun  1 17:38:56.157: INFO: Waiting for pod pod-projected-secrets-f7495ae1-8fd9-4724-864c-ef047b2b7c0c to disappear
Jun  1 17:38:56.171: INFO: Pod pod-projected-secrets-f7495ae1-8fd9-4724-864c-ef047b2b7c0c no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Jun  1 17:38:56.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3885" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":292,"completed":120,"skipped":2132,"failed":0}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 40 lines ...
Jun  1 17:39:06.762: INFO: Deleting pod "simpletest-rc-to-be-deleted-dfvlt" in namespace "gc-5117"
Jun  1 17:39:06.830: INFO: Deleting pod "simpletest-rc-to-be-deleted-hnqbp" in namespace "gc-5117"
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Jun  1 17:39:06.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5117" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":292,"completed":121,"skipped":2135,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Jun  1 17:39:14.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6254" for this suite.
STEP: Destroying namespace "webhook-6254-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":292,"completed":122,"skipped":2163,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 17:39:15.272: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun  1 17:39:15.421: INFO: Waiting up to 5m0s for pod "pod-1ede8217-ba47-4f08-aae1-b6badf8ae2c2" in namespace "emptydir-3320" to be "Succeeded or Failed"
Jun  1 17:39:15.439: INFO: Pod "pod-1ede8217-ba47-4f08-aae1-b6badf8ae2c2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.997063ms
Jun  1 17:39:17.460: INFO: Pod "pod-1ede8217-ba47-4f08-aae1-b6badf8ae2c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038345947s
Jun  1 17:39:19.470: INFO: Pod "pod-1ede8217-ba47-4f08-aae1-b6badf8ae2c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04835732s
STEP: Saw pod success
Jun  1 17:39:19.470: INFO: Pod "pod-1ede8217-ba47-4f08-aae1-b6badf8ae2c2" satisfied condition "Succeeded or Failed"
Jun  1 17:39:19.477: INFO: Trying to get logs from node kind-worker pod pod-1ede8217-ba47-4f08-aae1-b6badf8ae2c2 container test-container: <nil>
STEP: delete the pod
Jun  1 17:39:19.508: INFO: Waiting for pod pod-1ede8217-ba47-4f08-aae1-b6badf8ae2c2 to disappear
Jun  1 17:39:19.514: INFO: Pod pod-1ede8217-ba47-4f08-aae1-b6badf8ae2c2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 17:39:19.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3320" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":123,"skipped":2179,"failed":0}
SS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 17 lines ...
Jun  1 17:39:24.140: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 17:39:24.662: INFO: Deleting pod dns-7137...
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 17:39:24.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7137" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":292,"completed":124,"skipped":2181,"failed":0}
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 34 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Jun  1 17:39:45.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-351" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":292,"completed":125,"skipped":2183,"failed":0}
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-423dd5ee-1cd4-4381-86ad-44557142788c
STEP: Creating a pod to test consume secrets
Jun  1 17:39:45.449: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f2534ca4-8960-4cc2-9672-34806e4c2a0d" in namespace "projected-908" to be "Succeeded or Failed"
Jun  1 17:39:45.458: INFO: Pod "pod-projected-secrets-f2534ca4-8960-4cc2-9672-34806e4c2a0d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.936753ms
Jun  1 17:39:47.477: INFO: Pod "pod-projected-secrets-f2534ca4-8960-4cc2-9672-34806e4c2a0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027978817s
Jun  1 17:39:49.502: INFO: Pod "pod-projected-secrets-f2534ca4-8960-4cc2-9672-34806e4c2a0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052964328s
STEP: Saw pod success
Jun  1 17:39:49.502: INFO: Pod "pod-projected-secrets-f2534ca4-8960-4cc2-9672-34806e4c2a0d" satisfied condition "Succeeded or Failed"
Jun  1 17:39:49.525: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-f2534ca4-8960-4cc2-9672-34806e4c2a0d container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun  1 17:39:49.646: INFO: Waiting for pod pod-projected-secrets-f2534ca4-8960-4cc2-9672-34806e4c2a0d to disappear
Jun  1 17:39:49.664: INFO: Pod pod-projected-secrets-f2534ca4-8960-4cc2-9672-34806e4c2a0d no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Jun  1 17:39:49.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-908" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":126,"skipped":2187,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 25 lines ...
Jun  1 17:40:12.061: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun  1 17:40:12.076: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Jun  1 17:40:12.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5448" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":292,"completed":127,"skipped":2207,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-c629b46e-d2e3-4d2e-91e6-8b7e551a7184
STEP: Creating a pod to test consume secrets
Jun  1 17:40:12.242: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8502e6f2-9985-4da6-9e4c-e7ed69c1c662" in namespace "projected-6711" to be "Succeeded or Failed"
Jun  1 17:40:12.257: INFO: Pod "pod-projected-secrets-8502e6f2-9985-4da6-9e4c-e7ed69c1c662": Phase="Pending", Reason="", readiness=false. Elapsed: 14.672176ms
Jun  1 17:40:14.283: INFO: Pod "pod-projected-secrets-8502e6f2-9985-4da6-9e4c-e7ed69c1c662": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041628424s
Jun  1 17:40:16.305: INFO: Pod "pod-projected-secrets-8502e6f2-9985-4da6-9e4c-e7ed69c1c662": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063206413s
STEP: Saw pod success
Jun  1 17:40:16.305: INFO: Pod "pod-projected-secrets-8502e6f2-9985-4da6-9e4c-e7ed69c1c662" satisfied condition "Succeeded or Failed"
Jun  1 17:40:16.329: INFO: Trying to get logs from node kind-worker pod pod-projected-secrets-8502e6f2-9985-4da6-9e4c-e7ed69c1c662 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun  1 17:40:16.425: INFO: Waiting for pod pod-projected-secrets-8502e6f2-9985-4da6-9e4c-e7ed69c1c662 to disappear
Jun  1 17:40:16.445: INFO: Pod pod-projected-secrets-8502e6f2-9985-4da6-9e4c-e7ed69c1c662 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Jun  1 17:40:16.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6711" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":128,"skipped":2208,"failed":0}
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-d2757166-cba1-473a-9887-50125463347b
STEP: Creating a pod to test consume configMaps
Jun  1 17:40:16.604: INFO: Waiting up to 5m0s for pod "pod-configmaps-6c6116c7-5dce-48c9-aea2-ab93a42e87ec" in namespace "configmap-1905" to be "Succeeded or Failed"
Jun  1 17:40:16.613: INFO: Pod "pod-configmaps-6c6116c7-5dce-48c9-aea2-ab93a42e87ec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.725474ms
Jun  1 17:40:18.637: INFO: Pod "pod-configmaps-6c6116c7-5dce-48c9-aea2-ab93a42e87ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033426037s
Jun  1 17:40:20.660: INFO: Pod "pod-configmaps-6c6116c7-5dce-48c9-aea2-ab93a42e87ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056169154s
Jun  1 17:40:22.675: INFO: Pod "pod-configmaps-6c6116c7-5dce-48c9-aea2-ab93a42e87ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.071537181s
STEP: Saw pod success
Jun  1 17:40:22.676: INFO: Pod "pod-configmaps-6c6116c7-5dce-48c9-aea2-ab93a42e87ec" satisfied condition "Succeeded or Failed"
Jun  1 17:40:22.701: INFO: Trying to get logs from node kind-worker pod pod-configmaps-6c6116c7-5dce-48c9-aea2-ab93a42e87ec container configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 17:40:22.797: INFO: Waiting for pod pod-configmaps-6c6116c7-5dce-48c9-aea2-ab93a42e87ec to disappear
Jun  1 17:40:22.815: INFO: Pod pod-configmaps-6c6116c7-5dce-48c9-aea2-ab93a42e87ec no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 17:40:22.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1905" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":129,"skipped":2209,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 27 lines ...
  test/e2e/framework/framework.go:175
Jun  1 17:40:30.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3617" for this suite.
STEP: Destroying namespace "webhook-3617-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":292,"completed":130,"skipped":2228,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 52 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 17:41:01.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6717" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":292,"completed":131,"skipped":2237,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 94 lines ...
Jun  1 17:41:31.142: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7377/pods","resourceVersion":"16140"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Jun  1 17:41:31.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7377" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":292,"completed":132,"skipped":2246,"failed":0}
SSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 28 lines ...
Jun  1 17:41:40.570: INFO: Pod "test-rolling-update-deployment-df7bb669b-g2nz2" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-g2nz2 test-rolling-update-deployment-df7bb669b- deployment-7605 /api/v1/namespaces/deployment-7605/pods/test-rolling-update-deployment-df7bb669b-g2nz2 d3d7d144-5eaa-4b7e-8cb8-b97e2142973a 16222 0 2020-06-01 17:41:36 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-df7bb669b 3eebd96d-58fb-481a-ad48-f9bdf3192124 0xc002dddba0 0xc002dddba1}] []  [{kube-controller-manager Update v1 2020-06-01 17:41:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eebd96d-58fb-481a-ad48-f9bdf3192124\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-01 17:41:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.83\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2rrh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2rrh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2rrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 17:41:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 17:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 17:41:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 17:41:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.2.83,StartTime:2020-06-01 17:41:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-01 17:41:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://a5a33e931ee04369fb17aaf4714e1271ed2bce694a9c8928d997202a4e8c6ff6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.83,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Jun  1 17:41:40.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7605" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":292,"completed":133,"skipped":2251,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 25 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 17:42:01.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4355" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":292,"completed":134,"skipped":2294,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 140 lines ...
Jun  1 17:42:41.351: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1583/pods","resourceVersion":"16622"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Jun  1 17:42:41.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1583" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":292,"completed":135,"skipped":2328,"failed":0}
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 12 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-8586
STEP: Creating statefulset with conflicting port in namespace statefulset-8586
STEP: Waiting until pod test-pod will start running in namespace statefulset-8586
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-8586
Jun  1 17:42:47.606: INFO: Observed stateful pod in namespace: statefulset-8586, name: ss-0, uid: 358499f9-8c43-482c-9390-19127202bcc7, status phase: Pending. Waiting for statefulset controller to delete.
Jun  1 17:42:47.829: INFO: Observed stateful pod in namespace: statefulset-8586, name: ss-0, uid: 358499f9-8c43-482c-9390-19127202bcc7, status phase: Failed. Waiting for statefulset controller to delete.
Jun  1 17:42:47.858: INFO: Observed stateful pod in namespace: statefulset-8586, name: ss-0, uid: 358499f9-8c43-482c-9390-19127202bcc7, status phase: Failed. Waiting for statefulset controller to delete.
Jun  1 17:42:47.884: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8586
STEP: Removing pod with conflicting port in namespace statefulset-8586
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-8586 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:114
Jun  1 17:42:54.042: INFO: Deleting all statefulset in ns statefulset-8586
Jun  1 17:42:54.057: INFO: Scaling statefulset ss to 0
Jun  1 17:43:04.121: INFO: Waiting for statefulset status.replicas updated to 0
Jun  1 17:43:04.138: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Jun  1 17:43:04.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8586" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":292,"completed":136,"skipped":2331,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 17:43:04.433: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4586ee5e-c792-441e-a6b6-837f5cf85676" in namespace "projected-6332" to be "Succeeded or Failed"
Jun  1 17:43:04.447: INFO: Pod "downwardapi-volume-4586ee5e-c792-441e-a6b6-837f5cf85676": Phase="Pending", Reason="", readiness=false. Elapsed: 14.164835ms
Jun  1 17:43:06.458: INFO: Pod "downwardapi-volume-4586ee5e-c792-441e-a6b6-837f5cf85676": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024613392s
Jun  1 17:43:08.476: INFO: Pod "downwardapi-volume-4586ee5e-c792-441e-a6b6-837f5cf85676": Phase="Running", Reason="", readiness=true. Elapsed: 4.04296883s
Jun  1 17:43:10.497: INFO: Pod "downwardapi-volume-4586ee5e-c792-441e-a6b6-837f5cf85676": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063698247s
STEP: Saw pod success
Jun  1 17:43:10.497: INFO: Pod "downwardapi-volume-4586ee5e-c792-441e-a6b6-837f5cf85676" satisfied condition "Succeeded or Failed"
Jun  1 17:43:10.504: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-4586ee5e-c792-441e-a6b6-837f5cf85676 container client-container: <nil>
STEP: delete the pod
Jun  1 17:43:10.601: INFO: Waiting for pod downwardapi-volume-4586ee5e-c792-441e-a6b6-837f5cf85676 to disappear
Jun  1 17:43:10.620: INFO: Pod downwardapi-volume-4586ee5e-c792-441e-a6b6-837f5cf85676 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 17:43:10.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6332" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":137,"skipped":2340,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
Jun  1 17:43:17.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9363" for this suite.
STEP: Destroying namespace "webhook-9363-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":292,"completed":138,"skipped":2347,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Jun  1 17:43:18.124: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Jun  1 17:43:25.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2026" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":292,"completed":139,"skipped":2359,"failed":0}
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 26 lines ...
Jun  1 17:43:46.802: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 17:43:47.248: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Jun  1 17:43:47.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-446" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":292,"completed":140,"skipped":2360,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 15 lines ...
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Jun  1 17:44:01.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3063" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":292,"completed":141,"skipped":2402,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 14 lines ...
Jun  1 17:44:06.585: INFO: Trying to dial the pod
Jun  1 17:44:11.648: INFO: Controller my-hostname-basic-1ca2e3cd-4fd7-4255-aae5-e605a23a4c57: Got expected result from replica 1 [my-hostname-basic-1ca2e3cd-4fd7-4255-aae5-e605a23a4c57-7pqqb]: "my-hostname-basic-1ca2e3cd-4fd7-4255-aae5-e605a23a4c57-7pqqb", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Jun  1 17:44:11.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3152" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":292,"completed":142,"skipped":2419,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 15 lines ...
  test/e2e/framework/framework.go:175
Jun  1 17:44:18.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5932" for this suite.
STEP: Destroying namespace "nsdeletetest-2592" for this suite.
Jun  1 17:44:18.108: INFO: Namespace nsdeletetest-2592 was already deleted
STEP: Destroying namespace "nsdeletetest-8675" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":292,"completed":143,"skipped":2425,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Jun  1 17:44:18.229: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 17:44:19.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6305" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":292,"completed":144,"skipped":2450,"failed":0}
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-vhrl
STEP: Creating a pod to test atomic-volume-subpath
Jun  1 17:44:19.552: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vhrl" in namespace "subpath-7944" to be "Succeeded or Failed"
Jun  1 17:44:19.570: INFO: Pod "pod-subpath-test-configmap-vhrl": Phase="Pending", Reason="", readiness=false. Elapsed: 17.309536ms
Jun  1 17:44:21.586: INFO: Pod "pod-subpath-test-configmap-vhrl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033471792s
Jun  1 17:44:23.616: INFO: Pod "pod-subpath-test-configmap-vhrl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063267799s
Jun  1 17:44:25.630: INFO: Pod "pod-subpath-test-configmap-vhrl": Phase="Running", Reason="", readiness=true. Elapsed: 6.0773861s
Jun  1 17:44:27.648: INFO: Pod "pod-subpath-test-configmap-vhrl": Phase="Running", Reason="", readiness=true. Elapsed: 8.095154574s
Jun  1 17:44:29.656: INFO: Pod "pod-subpath-test-configmap-vhrl": Phase="Running", Reason="", readiness=true. Elapsed: 10.103590652s
... skipping 3 lines ...
Jun  1 17:44:37.733: INFO: Pod "pod-subpath-test-configmap-vhrl": Phase="Running", Reason="", readiness=true. Elapsed: 18.181074842s
Jun  1 17:44:39.756: INFO: Pod "pod-subpath-test-configmap-vhrl": Phase="Running", Reason="", readiness=true. Elapsed: 20.203476457s
Jun  1 17:44:41.773: INFO: Pod "pod-subpath-test-configmap-vhrl": Phase="Running", Reason="", readiness=true. Elapsed: 22.220214685s
Jun  1 17:44:43.802: INFO: Pod "pod-subpath-test-configmap-vhrl": Phase="Running", Reason="", readiness=true. Elapsed: 24.249522796s
Jun  1 17:44:45.817: INFO: Pod "pod-subpath-test-configmap-vhrl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.264893161s
STEP: Saw pod success
Jun  1 17:44:45.817: INFO: Pod "pod-subpath-test-configmap-vhrl" satisfied condition "Succeeded or Failed"
Jun  1 17:44:45.841: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-configmap-vhrl container test-container-subpath-configmap-vhrl: <nil>
STEP: delete the pod
Jun  1 17:44:45.952: INFO: Waiting for pod pod-subpath-test-configmap-vhrl to disappear
Jun  1 17:44:45.966: INFO: Pod pod-subpath-test-configmap-vhrl no longer exists
STEP: Deleting pod pod-subpath-test-configmap-vhrl
Jun  1 17:44:45.966: INFO: Deleting pod "pod-subpath-test-configmap-vhrl" in namespace "subpath-7944"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Jun  1 17:44:45.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7944" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":292,"completed":145,"skipped":2453,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-f7c14285-1bfe-420b-8e7d-442a7f3d554f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Jun  1 17:46:25.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2389" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":146,"skipped":2470,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 17:46:26.029: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jun  1 17:46:26.189: INFO: Waiting up to 5m0s for pod "pod-21495bcf-21ed-4b3a-866b-114030f27ea6" in namespace "emptydir-368" to be "Succeeded or Failed"
Jun  1 17:46:26.208: INFO: Pod "pod-21495bcf-21ed-4b3a-866b-114030f27ea6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.326702ms
Jun  1 17:46:28.220: INFO: Pod "pod-21495bcf-21ed-4b3a-866b-114030f27ea6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029688443s
Jun  1 17:46:30.251: INFO: Pod "pod-21495bcf-21ed-4b3a-866b-114030f27ea6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060601813s
STEP: Saw pod success
Jun  1 17:46:30.252: INFO: Pod "pod-21495bcf-21ed-4b3a-866b-114030f27ea6" satisfied condition "Succeeded or Failed"
Jun  1 17:46:30.259: INFO: Trying to get logs from node kind-worker2 pod pod-21495bcf-21ed-4b3a-866b-114030f27ea6 container test-container: <nil>
STEP: delete the pod
Jun  1 17:46:30.393: INFO: Waiting for pod pod-21495bcf-21ed-4b3a-866b-114030f27ea6 to disappear
Jun  1 17:46:30.404: INFO: Pod pod-21495bcf-21ed-4b3a-866b-114030f27ea6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 17:46:30.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-368" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":147,"skipped":2472,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 20 lines ...
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Jun  1 17:47:04.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2243" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":292,"completed":148,"skipped":2505,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 28 lines ...
Jun  1 17:47:29.573: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 17:47:29.975: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Jun  1 17:47:29.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7412" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":292,"completed":149,"skipped":2524,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 347 lines ...
Jun  1 17:47:45.490: INFO: Deleting ReplicationController proxy-service-d8grj took: 35.054541ms
Jun  1 17:47:45.890: INFO: Terminating ReplicationController proxy-service-d8grj pods took: 400.351056ms
[AfterEach] version v1
  test/e2e/framework/framework.go:175
Jun  1 17:47:51.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5721" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":292,"completed":150,"skipped":2536,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 17:47:51.324: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod with a failed condition
STEP: updating the pod
Jun  1 17:49:52.022: INFO: Successfully updated pod "var-expansion-2fcafea3-19c1-4498-87d2-3b1fcfbc8d49"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Jun  1 17:49:54.060: INFO: Deleting pod "var-expansion-2fcafea3-19c1-4498-87d2-3b1fcfbc8d49" in namespace "var-expansion-2409"
Jun  1 17:49:54.109: INFO: Wait up to 5m0s for pod "var-expansion-2fcafea3-19c1-4498-87d2-3b1fcfbc8d49" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 17:50:32.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2409" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":292,"completed":151,"skipped":2561,"failed":0}
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 24 lines ...
Jun  1 17:50:33.115: INFO: created pod pod-service-account-nomountsa-nomountspec
Jun  1 17:50:33.115: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
Jun  1 17:50:33.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3846" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":292,"completed":152,"skipped":2564,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Jun  1 17:50:39.508: INFO: Initial restart count of pod liveness-8b877003-3e65-49b7-9346-34727b96111c is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 17:54:41.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-673" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":292,"completed":153,"skipped":2576,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 17:54:41.611: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun  1 17:54:41.761: INFO: Waiting up to 5m0s for pod "pod-5fd9d9fa-6276-4c61-a47a-45ca0d72bfbf" in namespace "emptydir-1033" to be "Succeeded or Failed"
Jun  1 17:54:41.768: INFO: Pod "pod-5fd9d9fa-6276-4c61-a47a-45ca0d72bfbf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.501445ms
Jun  1 17:54:43.792: INFO: Pod "pod-5fd9d9fa-6276-4c61-a47a-45ca0d72bfbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030724666s
Jun  1 17:54:45.805: INFO: Pod "pod-5fd9d9fa-6276-4c61-a47a-45ca0d72bfbf": Phase="Running", Reason="", readiness=true. Elapsed: 4.043278757s
Jun  1 17:54:47.822: INFO: Pod "pod-5fd9d9fa-6276-4c61-a47a-45ca0d72bfbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060720402s
STEP: Saw pod success
Jun  1 17:54:47.823: INFO: Pod "pod-5fd9d9fa-6276-4c61-a47a-45ca0d72bfbf" satisfied condition "Succeeded or Failed"
Jun  1 17:54:47.836: INFO: Trying to get logs from node kind-worker pod pod-5fd9d9fa-6276-4c61-a47a-45ca0d72bfbf container test-container: <nil>
STEP: delete the pod
Jun  1 17:54:47.957: INFO: Waiting for pod pod-5fd9d9fa-6276-4c61-a47a-45ca0d72bfbf to disappear
Jun  1 17:54:47.978: INFO: Pod pod-5fd9d9fa-6276-4c61-a47a-45ca0d72bfbf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 17:54:47.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1033" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":154,"skipped":2606,"failed":0}
SSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a volume subpath [sig-storage] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Jun  1 17:54:48.024: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in volume subpath
Jun  1 17:54:48.199: INFO: Waiting up to 5m0s for pod "var-expansion-28d94ae1-3215-4074-b692-1a53ea8abc66" in namespace "var-expansion-6797" to be "Succeeded or Failed"
Jun  1 17:54:48.213: INFO: Pod "var-expansion-28d94ae1-3215-4074-b692-1a53ea8abc66": Phase="Pending", Reason="", readiness=false. Elapsed: 14.482995ms
Jun  1 17:54:50.230: INFO: Pod "var-expansion-28d94ae1-3215-4074-b692-1a53ea8abc66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031277924s
Jun  1 17:54:52.259: INFO: Pod "var-expansion-28d94ae1-3215-4074-b692-1a53ea8abc66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060514253s
STEP: Saw pod success
Jun  1 17:54:52.259: INFO: Pod "var-expansion-28d94ae1-3215-4074-b692-1a53ea8abc66" satisfied condition "Succeeded or Failed"
Jun  1 17:54:52.271: INFO: Trying to get logs from node kind-worker pod var-expansion-28d94ae1-3215-4074-b692-1a53ea8abc66 container dapi-container: <nil>
STEP: delete the pod
Jun  1 17:54:52.353: INFO: Waiting for pod var-expansion-28d94ae1-3215-4074-b692-1a53ea8abc66 to disappear
Jun  1 17:54:52.369: INFO: Pod var-expansion-28d94ae1-3215-4074-b692-1a53ea8abc66 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 17:54:52.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6797" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":292,"completed":155,"skipped":2612,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Jun  1 17:54:56.697: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Jun  1 17:54:56.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8181" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":292,"completed":156,"skipped":2672,"failed":0}

------------------------------
[k8s.io] Variable Expansion 
  should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 19 lines ...
Jun  1 17:55:04.441: INFO: Deleting pod "var-expansion-7139af40-c392-4383-ba10-59d3703d00c1" in namespace "var-expansion-2946"
Jun  1 17:55:04.455: INFO: Wait up to 5m0s for pod "var-expansion-7139af40-c392-4383-ba10-59d3703d00c1" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 17:55:42.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2946" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":292,"completed":157,"skipped":2672,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Jun  1 17:55:42.662: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Jun  1 17:55:59.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8703" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":292,"completed":158,"skipped":2684,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-d7e445fd-eda5-4a28-a0e8-e69dc844925a
STEP: Creating a pod to test consume configMaps
Jun  1 17:56:00.056: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0edcc2f0-83a5-4236-90ae-a97fbf21788f" in namespace "projected-7578" to be "Succeeded or Failed"
Jun  1 17:56:00.079: INFO: Pod "pod-projected-configmaps-0edcc2f0-83a5-4236-90ae-a97fbf21788f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.343961ms
Jun  1 17:56:02.094: INFO: Pod "pod-projected-configmaps-0edcc2f0-83a5-4236-90ae-a97fbf21788f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037827355s
Jun  1 17:56:04.118: INFO: Pod "pod-projected-configmaps-0edcc2f0-83a5-4236-90ae-a97fbf21788f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062284702s
Jun  1 17:56:06.134: INFO: Pod "pod-projected-configmaps-0edcc2f0-83a5-4236-90ae-a97fbf21788f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.078379519s
STEP: Saw pod success
Jun  1 17:56:06.135: INFO: Pod "pod-projected-configmaps-0edcc2f0-83a5-4236-90ae-a97fbf21788f" satisfied condition "Succeeded or Failed"
Jun  1 17:56:06.152: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-0edcc2f0-83a5-4236-90ae-a97fbf21788f container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 17:56:06.263: INFO: Waiting for pod pod-projected-configmaps-0edcc2f0-83a5-4236-90ae-a97fbf21788f to disappear
Jun  1 17:56:06.282: INFO: Pod pod-projected-configmaps-0edcc2f0-83a5-4236-90ae-a97fbf21788f no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 17:56:06.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7578" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":159,"skipped":2700,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Jun  1 17:56:06.510: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 17:56:06.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9817" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":292,"completed":160,"skipped":2704,"failed":0}
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 17:56:07.000: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod
Jun  1 17:56:07.104: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Jun  1 17:56:17.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4946" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":292,"completed":161,"skipped":2711,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 17:56:17.695: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-3b4434b7-ae88-4d1f-8352-dcbd0c82d1db
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Jun  1 17:56:17.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8555" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":292,"completed":162,"skipped":2759,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 48 lines ...
Jun  1 17:56:28.752: INFO: stderr: ""
Jun  1 17:56:28.752: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 17:56:28.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3677" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":292,"completed":163,"skipped":2779,"failed":0}
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-downwardapi-rxdm
STEP: Creating a pod to test atomic-volume-subpath
Jun  1 17:56:28.993: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rxdm" in namespace "subpath-4114" to be "Succeeded or Failed"
Jun  1 17:56:29.017: INFO: Pod "pod-subpath-test-downwardapi-rxdm": Phase="Pending", Reason="", readiness=false. Elapsed: 24.314284ms
Jun  1 17:56:31.033: INFO: Pod "pod-subpath-test-downwardapi-rxdm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040586376s
Jun  1 17:56:33.064: INFO: Pod "pod-subpath-test-downwardapi-rxdm": Phase="Running", Reason="", readiness=true. Elapsed: 4.071815985s
Jun  1 17:56:35.092: INFO: Pod "pod-subpath-test-downwardapi-rxdm": Phase="Running", Reason="", readiness=true. Elapsed: 6.098960243s
Jun  1 17:56:37.110: INFO: Pod "pod-subpath-test-downwardapi-rxdm": Phase="Running", Reason="", readiness=true. Elapsed: 8.117097845s
Jun  1 17:56:39.125: INFO: Pod "pod-subpath-test-downwardapi-rxdm": Phase="Running", Reason="", readiness=true. Elapsed: 10.132484766s
... skipping 2 lines ...
Jun  1 17:56:45.169: INFO: Pod "pod-subpath-test-downwardapi-rxdm": Phase="Running", Reason="", readiness=true. Elapsed: 16.176592182s
Jun  1 17:56:47.181: INFO: Pod "pod-subpath-test-downwardapi-rxdm": Phase="Running", Reason="", readiness=true. Elapsed: 18.1883534s
Jun  1 17:56:49.192: INFO: Pod "pod-subpath-test-downwardapi-rxdm": Phase="Running", Reason="", readiness=true. Elapsed: 20.199117894s
Jun  1 17:56:51.213: INFO: Pod "pod-subpath-test-downwardapi-rxdm": Phase="Running", Reason="", readiness=true. Elapsed: 22.220815914s
Jun  1 17:56:53.228: INFO: Pod "pod-subpath-test-downwardapi-rxdm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.235611324s
STEP: Saw pod success
Jun  1 17:56:53.228: INFO: Pod "pod-subpath-test-downwardapi-rxdm" satisfied condition "Succeeded or Failed"
Jun  1 17:56:53.245: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-downwardapi-rxdm container test-container-subpath-downwardapi-rxdm: <nil>
STEP: delete the pod
Jun  1 17:56:53.324: INFO: Waiting for pod pod-subpath-test-downwardapi-rxdm to disappear
Jun  1 17:56:53.341: INFO: Pod pod-subpath-test-downwardapi-rxdm no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-rxdm
Jun  1 17:56:53.341: INFO: Deleting pod "pod-subpath-test-downwardapi-rxdm" in namespace "subpath-4114"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Jun  1 17:56:53.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4114" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":292,"completed":164,"skipped":2785,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 17:57:09.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8222" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":292,"completed":165,"skipped":2834,"failed":0}
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should run through the lifecycle of a ServiceAccount [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 10 lines ...
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
Jun  1 17:57:10.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8775" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":292,"completed":166,"skipped":2837,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
Jun  1 17:57:15.048: INFO: Successfully updated pod "annotationupdatef4ae021d-a857-4bb1-84c9-65681ef72916"
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 17:57:19.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2661" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":292,"completed":167,"skipped":2852,"failed":0}
SSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 21 lines ...
Jun  1 17:57:28.478: INFO: Pod "test-cleanup-deployment-6688745694-vh82w" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-6688745694-vh82w test-cleanup-deployment-6688745694- deployment-3640 /api/v1/namespaces/deployment-3640/pods/test-cleanup-deployment-6688745694-vh82w a6b8e11f-468e-4fdb-ad40-77ee7a91076a 20521 0 2020-06-01 17:57:24 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 578a01cc-0d1a-4882-8e86-703c3ea8998d 0xc00262d657 0xc00262d658}] []  [{kube-controller-manager Update v1 2020-06-01 17:57:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"578a01cc-0d1a-4882-8e86-703c3ea8998d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-01 17:57:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.100\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lktjr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lktjr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lktjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 17:57:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 17:57:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 17:57:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 17:57:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.2.100,StartTime:2020-06-01 17:57:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-01 17:57:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://91a7af34f056dc99bd7b56b68f4869d55f399853e5e8972906fd12eda529f31a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.100,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Jun  1 17:57:28.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3640" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":292,"completed":168,"skipped":2856,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 29 lines ...
  test/e2e/framework/framework.go:175
Jun  1 17:57:47.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3159" for this suite.
STEP: Destroying namespace "webhook-3159-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":292,"completed":169,"skipped":2868,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 13 lines ...
Jun  1 17:57:48.129: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-2166 /api/v1/namespaces/watch-2166/configmaps/e2e-watch-test-resource-version 6ccf09fa-ff53-49cd-9cc9-9caf36ed9506 20680 0 2020-06-01 17:57:47 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-06-01 17:57:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jun  1 17:57:48.129: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-2166 /api/v1/namespaces/watch-2166/configmaps/e2e-watch-test-resource-version 6ccf09fa-ff53-49cd-9cc9-9caf36ed9506 20681 0 2020-06-01 17:57:47 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-06-01 17:57:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Jun  1 17:57:48.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2166" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":292,"completed":170,"skipped":2900,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Jun  1 17:57:59.800: INFO: stderr: ""
Jun  1 17:57:59.800: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7124-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<map[string]>\n     Specification of Waldo\n\n   status\t<Object>\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 17:58:03.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6885" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":292,"completed":171,"skipped":2952,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-0e78f476-be96-4aec-8b10-b0c612d4f8cd
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 17:58:12.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1569" for this suite.
•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":172,"skipped":2966,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 5 lines ...
[BeforeEach] [sig-network] Services
  test/e2e/network/service.go:809
[It] should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating service multi-endpoint-test in namespace services-899
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-899 to expose endpoints map[]
Jun  1 17:58:12.484: INFO: Get endpoints failed (25.904144ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jun  1 17:58:13.511: INFO: successfully validated that service multi-endpoint-test in namespace services-899 exposes endpoints map[] (1.05352459s elapsed)
STEP: Creating pod pod1 in namespace services-899
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-899 to expose endpoints map[pod1:[100]]
Jun  1 17:58:17.726: INFO: successfully validated that service multi-endpoint-test in namespace services-899 exposes endpoints map[pod1:[100]] (4.171704698s elapsed)
STEP: Creating pod pod2 in namespace services-899
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-899 to expose endpoints map[pod1:[100] pod2:[101]]
... skipping 7 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 17:58:22.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-899" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":292,"completed":173,"skipped":3005,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 26 lines ...
Jun  1 17:58:26.810: INFO: Pod "test-recreate-deployment-d5667d9c7-7tthr" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-7tthr test-recreate-deployment-d5667d9c7- deployment-4966 /api/v1/namespaces/deployment-4966/pods/test-recreate-deployment-d5667d9c7-7tthr fdfb9f7f-48d8-48af-b586-1baaacf4d1fe 20966 0 2020-06-01 17:58:26 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 08d88fd9-d0c5-4530-a40a-4dc431025403 0xc004c1ad00 0xc004c1ad01}] []  [{kube-controller-manager Update v1 2020-06-01 17:58:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"08d88fd9-d0c5-4530-a40a-4dc431025403\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-01 17:58:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fzjrj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fzjrj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fzjrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 17:58:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 17:58:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 17:58:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 17:58:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-06-01 17:58:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Jun  1 17:58:26.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4966" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":292,"completed":174,"skipped":3022,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Pods Extended
... skipping 10 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  test/e2e/framework/framework.go:175
Jun  1 17:58:26.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1607" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":292,"completed":175,"skipped":3062,"failed":0}

------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Jun  1 17:58:27.037: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's args
Jun  1 17:58:27.156: INFO: Waiting up to 5m0s for pod "var-expansion-ca8f3ae6-a32c-4c1f-8be1-8cef020189e6" in namespace "var-expansion-9720" to be "Succeeded or Failed"
Jun  1 17:58:27.164: INFO: Pod "var-expansion-ca8f3ae6-a32c-4c1f-8be1-8cef020189e6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.164305ms
Jun  1 17:58:29.183: INFO: Pod "var-expansion-ca8f3ae6-a32c-4c1f-8be1-8cef020189e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02688274s
Jun  1 17:58:31.199: INFO: Pod "var-expansion-ca8f3ae6-a32c-4c1f-8be1-8cef020189e6": Phase="Running", Reason="", readiness=true. Elapsed: 4.042603015s
Jun  1 17:58:33.209: INFO: Pod "var-expansion-ca8f3ae6-a32c-4c1f-8be1-8cef020189e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052458787s
STEP: Saw pod success
Jun  1 17:58:33.209: INFO: Pod "var-expansion-ca8f3ae6-a32c-4c1f-8be1-8cef020189e6" satisfied condition "Succeeded or Failed"
Jun  1 17:58:33.217: INFO: Trying to get logs from node kind-worker2 pod var-expansion-ca8f3ae6-a32c-4c1f-8be1-8cef020189e6 container dapi-container: <nil>
STEP: delete the pod
Jun  1 17:58:33.300: INFO: Waiting for pod var-expansion-ca8f3ae6-a32c-4c1f-8be1-8cef020189e6 to disappear
Jun  1 17:58:33.329: INFO: Pod var-expansion-ca8f3ae6-a32c-4c1f-8be1-8cef020189e6 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 17:58:33.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9720" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":292,"completed":176,"skipped":3062,"failed":0}
SSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jun  1 17:58:38.127: INFO: Successfully updated pod "pod-update-activedeadlineseconds-dcad3d88-0fd6-49ef-96bf-f3908933f663"
Jun  1 17:58:38.128: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-dcad3d88-0fd6-49ef-96bf-f3908933f663" in namespace "pods-2128" to be "terminated due to deadline exceeded"
Jun  1 17:58:38.139: INFO: Pod "pod-update-activedeadlineseconds-dcad3d88-0fd6-49ef-96bf-f3908933f663": Phase="Running", Reason="", readiness=true. Elapsed: 11.842746ms
Jun  1 17:58:40.164: INFO: Pod "pod-update-activedeadlineseconds-dcad3d88-0fd6-49ef-96bf-f3908933f663": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.036310408s
Jun  1 17:58:40.164: INFO: Pod "pod-update-activedeadlineseconds-dcad3d88-0fd6-49ef-96bf-f3908933f663" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Jun  1 17:58:40.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2128" for this suite.
•{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":292,"completed":177,"skipped":3067,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 24 lines ...
Jun  1 17:58:46.735: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jun  1 17:58:46.735: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config describe pod agnhost-master-nbvvq --namespace=kubectl-4660'
Jun  1 17:58:47.384: INFO: stderr: ""
Jun  1 17:58:47.384: INFO: stdout: "Name:         agnhost-master-nbvvq\nNamespace:    kubectl-4660\nPriority:     0\nNode:         kind-worker/172.18.0.2\nStart Time:   Mon, 01 Jun 2020 17:58:41 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.244.1.190\nIPs:\n  IP:           10.244.1.190\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://a3479530fb6bb33b783c6de7bc2f4a19db00d699ed08172e559686ed65afc5f6\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 01 Jun 2020 17:58:45 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-llpx2 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-llpx2:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-llpx2\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From                  Message\n  ----    ------     ----  ----                  -------\n  Normal  Scheduled  6s    default-scheduler     Successfully assigned kubectl-4660/agnhost-master-nbvvq to kind-worker\n  Normal  Pulled     3s    kubelet, kind-worker  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n  Normal  Created    3s    kubelet, kind-worker  Created container agnhost-master\n  Normal  Started    2s    kubelet, kind-worker  Started container agnhost-master\n"
Jun  1 17:58:47.384: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config describe rc agnhost-master --namespace=kubectl-4660'
Jun  1 17:58:48.156: INFO: stderr: ""
Jun  1 17:58:48.156: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-4660\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  7s    replication-controller  Created pod: agnhost-master-nbvvq\n"
Jun  1 17:58:48.157: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config describe service agnhost-master --namespace=kubectl-4660'
Jun  1 17:58:48.781: INFO: stderr: ""
Jun  1 17:58:48.781: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-4660\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.101.163.163\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.1.190:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jun  1 17:58:48.801: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config describe node kind-control-plane'
Jun  1 17:58:49.444: INFO: stderr: ""
Jun  1 17:58:49.444: INFO: stdout: "Name:               kind-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kind-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Mon, 01 Jun 2020 17:00:35 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kind-control-plane\n  AcquireTime:     <unset>\n  RenewTime:       Mon, 01 Jun 2020 17:58:45 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Mon, 01 Jun 2020 17:56:54 +0000   Mon, 01 Jun 2020 17:00:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Mon, 01 Jun 2020 17:56:54 +0000   Mon, 01 Jun 2020 17:00:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Mon, 01 Jun 2020 17:56:54 +0000   Mon, 01 Jun 2020 17:00:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Mon, 01 Jun 2020 17:56:54 +0000   Mon, 01 Jun 2020 17:01:48 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.4\n  Hostname:    kind-control-plane\nCapacity:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             53582972Ki\n  pods:               110\nAllocatable:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             53582972Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 56f92320acbe43eb9ed857b1f979cb8c\n  System UUID:                665d0d83-28e1-499f-ab6f-2e232ea5eab5\n  Boot ID:                    d70f8384-c83b-489b-b631-b5ce22bc7a85\n  Kernel Version:             4.15.0-1044-gke\n  OS Image:                   Ubuntu 20.04 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.4-12-g1e902b2d\n  Kubelet Version:            v1.19.0-beta.0.320+3fc7831cd8a704\n  Kube-Proxy Version:         v1.19.0-beta.0.320+3fc7831cd8a704\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (8 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-66bff467f8-zxdjt                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     50m\n  kube-system                 etcd-kind-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         58m\n  kube-system          
       kindnet-p2prq                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      57m\n  kube-system                 kube-apiserver-kind-control-plane             250m (3%)     0 (0%)      0 (0%)           0 (0%)         58m\n  kube-system                 kube-controller-manager-kind-control-plane    200m (2%)     0 (0%)      0 (0%)           0 (0%)         58m\n  kube-system                 kube-proxy-ks4gj                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57m\n  kube-system                 kube-scheduler-kind-control-plane             100m (1%)     0 (0%)      0 (0%)           0 (0%)         58m\n  local-path-storage          local-path-provisioner-bd4bb6b75-t8zz8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         57m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                750m (9%)   100m (1%)\n  memory             120Mi (0%)  220Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:\n  Type     Reason                    Age                From                            Message\n  ----     ------                    ----               ----                            -------\n  Normal   Starting                  58m                kubelet, kind-control-plane     Starting kubelet.\n  Normal   NodeHasSufficientMemory   58m (x6 over 58m)  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure     58m (x6 over 58m)  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID      58m (x5 over 58m)  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientPID\n  Warning  CheckLimitsForResolvConf  58m                kubelet, kind-control-plane     Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\n  Normal   NodeAllocatableEnforced   58m                kubelet, kind-control-plane     Updated Node Allocatable limit across pods\n  Normal   NodeHasSufficientMemory   58m                kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientMemory\n  Normal   Starting                  58m                kubelet, kind-control-plane     Starting kubelet.\n  Normal   NodeHasNoDiskPressure     58m                kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID      58m                kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientPID\n  Warning  CheckLimitsForResolvConf  58m                kubelet, kind-control-plane     Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\n  Normal   NodeAllocatableEnforced   58m                kubelet, kind-control-plane     Updated Node Allocatable limit across pods\n  Normal   Starting                  57m                kube-proxy, kind-control-plane  Starting kube-proxy.\n  Normal   NodeReady                 57m                kubelet, kind-control-plane     Node kind-control-plane status is now: NodeReady\n"
Jun  1 17:58:49.444: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config describe namespace kubectl-4660'
Jun  1 17:58:50.093: INFO: stderr: ""
Jun  1 17:58:50.093: INFO: stdout: "Name:         kubectl-4660\nLabels:       e2e-framework=kubectl\n              e2e-run=433bb595-0e10-407e-83bc-d28099d99a1e\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 17:58:50.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4660" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":292,"completed":178,"skipped":3068,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Jun  1 17:58:50.242: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 17:58:54.113: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 17:59:09.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7854" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":292,"completed":179,"skipped":3095,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 17:59:21.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9611" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":292,"completed":180,"skipped":3100,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
... skipping 12 lines ...
Jun  1 17:59:25.653: INFO: Terminating Job.batch foo pods took: 302.658832ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Jun  1 18:00:11.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4722" for this suite.
•{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":292,"completed":181,"skipped":3159,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 18:00:11.564: INFO: Waiting up to 5m0s for pod "downwardapi-volume-010255f7-fbe1-4e3b-a9a3-25aaac5bc0ae" in namespace "downward-api-2898" to be "Succeeded or Failed"
Jun  1 18:00:11.582: INFO: Pod "downwardapi-volume-010255f7-fbe1-4e3b-a9a3-25aaac5bc0ae": Phase="Pending", Reason="", readiness=false. Elapsed: 15.834299ms
Jun  1 18:00:13.594: INFO: Pod "downwardapi-volume-010255f7-fbe1-4e3b-a9a3-25aaac5bc0ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028236401s
Jun  1 18:00:15.606: INFO: Pod "downwardapi-volume-010255f7-fbe1-4e3b-a9a3-25aaac5bc0ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040585711s
Jun  1 18:00:17.639: INFO: Pod "downwardapi-volume-010255f7-fbe1-4e3b-a9a3-25aaac5bc0ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.073586034s
STEP: Saw pod success
Jun  1 18:00:17.639: INFO: Pod "downwardapi-volume-010255f7-fbe1-4e3b-a9a3-25aaac5bc0ae" satisfied condition "Succeeded or Failed"
Jun  1 18:00:17.655: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-010255f7-fbe1-4e3b-a9a3-25aaac5bc0ae container client-container: <nil>
STEP: delete the pod
Jun  1 18:00:17.742: INFO: Waiting for pod downwardapi-volume-010255f7-fbe1-4e3b-a9a3-25aaac5bc0ae to disappear
Jun  1 18:00:17.756: INFO: Pod downwardapi-volume-010255f7-fbe1-4e3b-a9a3-25aaac5bc0ae no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 18:00:17.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2898" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":292,"completed":182,"skipped":3183,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 28 lines ...
Jun  1 18:01:39.056: INFO: Terminating ReplicationController wrapped-volume-race-6994f277-f0e9-4d93-bda4-d96325eb49c9 pods took: 402.68062ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:175
Jun  1 18:01:52.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-42" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":292,"completed":183,"skipped":3185,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 8 lines ...
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:175
Jun  1 18:01:59.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7718" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":292,"completed":184,"skipped":3187,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Jun  1 18:02:12.837: INFO: stderr: ""
Jun  1 18:02:12.838: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3559-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 18:02:16.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1343" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":292,"completed":185,"skipped":3205,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Jun  1 18:02:16.825: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override all
Jun  1 18:02:16.958: INFO: Waiting up to 5m0s for pod "client-containers-d49466f9-618d-4804-9d56-ab2ae2a5fd62" in namespace "containers-7285" to be "Succeeded or Failed"
Jun  1 18:02:16.973: INFO: Pod "client-containers-d49466f9-618d-4804-9d56-ab2ae2a5fd62": Phase="Pending", Reason="", readiness=false. Elapsed: 12.763304ms
Jun  1 18:02:19.013: INFO: Pod "client-containers-d49466f9-618d-4804-9d56-ab2ae2a5fd62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052888438s
Jun  1 18:02:21.056: INFO: Pod "client-containers-d49466f9-618d-4804-9d56-ab2ae2a5fd62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095940518s
STEP: Saw pod success
Jun  1 18:02:21.056: INFO: Pod "client-containers-d49466f9-618d-4804-9d56-ab2ae2a5fd62" satisfied condition "Succeeded or Failed"
Jun  1 18:02:21.076: INFO: Trying to get logs from node kind-worker pod client-containers-d49466f9-618d-4804-9d56-ab2ae2a5fd62 container test-container: <nil>
STEP: delete the pod
Jun  1 18:02:21.262: INFO: Waiting for pod client-containers-d49466f9-618d-4804-9d56-ab2ae2a5fd62 to disappear
Jun  1 18:02:21.276: INFO: Pod client-containers-d49466f9-618d-4804-9d56-ab2ae2a5fd62 no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Jun  1 18:02:21.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7285" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":292,"completed":186,"skipped":3223,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-5e211cdb-6fc7-4ef6-a197-f1417e651525
STEP: Creating a pod to test consume secrets
Jun  1 18:02:21.499: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ef894692-51f0-42ff-8cce-22a98b4e3675" in namespace "projected-2636" to be "Succeeded or Failed"
Jun  1 18:02:21.516: INFO: Pod "pod-projected-secrets-ef894692-51f0-42ff-8cce-22a98b4e3675": Phase="Pending", Reason="", readiness=false. Elapsed: 16.991174ms
Jun  1 18:02:23.528: INFO: Pod "pod-projected-secrets-ef894692-51f0-42ff-8cce-22a98b4e3675": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028569766s
Jun  1 18:02:25.565: INFO: Pod "pod-projected-secrets-ef894692-51f0-42ff-8cce-22a98b4e3675": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065620508s
STEP: Saw pod success
Jun  1 18:02:25.565: INFO: Pod "pod-projected-secrets-ef894692-51f0-42ff-8cce-22a98b4e3675" satisfied condition "Succeeded or Failed"
Jun  1 18:02:25.578: INFO: Trying to get logs from node kind-worker pod pod-projected-secrets-ef894692-51f0-42ff-8cce-22a98b4e3675 container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 18:02:25.637: INFO: Waiting for pod pod-projected-secrets-ef894692-51f0-42ff-8cce-22a98b4e3675 to disappear
Jun  1 18:02:25.642: INFO: Pod pod-projected-secrets-ef894692-51f0-42ff-8cce-22a98b4e3675 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Jun  1 18:02:25.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2636" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":292,"completed":187,"skipped":3238,"failed":0}
SSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Jun  1 18:02:30.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2971" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":292,"completed":188,"skipped":3243,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 18:02:30.489: INFO: Waiting up to 5m0s for pod "downwardapi-volume-42b307d1-d5bd-4fc7-b5f5-52f085086e32" in namespace "projected-657" to be "Succeeded or Failed"
Jun  1 18:02:30.506: INFO: Pod "downwardapi-volume-42b307d1-d5bd-4fc7-b5f5-52f085086e32": Phase="Pending", Reason="", readiness=false. Elapsed: 16.79729ms
Jun  1 18:02:32.520: INFO: Pod "downwardapi-volume-42b307d1-d5bd-4fc7-b5f5-52f085086e32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030904109s
Jun  1 18:02:34.558: INFO: Pod "downwardapi-volume-42b307d1-d5bd-4fc7-b5f5-52f085086e32": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068933842s
Jun  1 18:02:36.581: INFO: Pod "downwardapi-volume-42b307d1-d5bd-4fc7-b5f5-52f085086e32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.091953155s
STEP: Saw pod success
Jun  1 18:02:36.581: INFO: Pod "downwardapi-volume-42b307d1-d5bd-4fc7-b5f5-52f085086e32" satisfied condition "Succeeded or Failed"
Jun  1 18:02:36.594: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-42b307d1-d5bd-4fc7-b5f5-52f085086e32 container client-container: <nil>
STEP: delete the pod
Jun  1 18:02:36.680: INFO: Waiting for pod downwardapi-volume-42b307d1-d5bd-4fc7-b5f5-52f085086e32 to disappear
Jun  1 18:02:36.702: INFO: Pod downwardapi-volume-42b307d1-d5bd-4fc7-b5f5-52f085086e32 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 18:02:36.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-657" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":189,"skipped":3265,"failed":0}
S
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 33 lines ...

W0601 18:02:46.936063   12365 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Jun  1 18:02:46.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9122" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":292,"completed":190,"skipped":3266,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-d741d73b-40c3-4f77-8c0b-c2d7164e170d
STEP: Creating a pod to test consume secrets
Jun  1 18:02:47.144: INFO: Waiting up to 5m0s for pod "pod-secrets-566bfa3b-f66a-4a16-a7ce-1fd5c3c4cdd8" in namespace "secrets-6088" to be "Succeeded or Failed"
Jun  1 18:02:47.148: INFO: Pod "pod-secrets-566bfa3b-f66a-4a16-a7ce-1fd5c3c4cdd8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.514894ms
Jun  1 18:02:49.164: INFO: Pod "pod-secrets-566bfa3b-f66a-4a16-a7ce-1fd5c3c4cdd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020511709s
Jun  1 18:02:51.183: INFO: Pod "pod-secrets-566bfa3b-f66a-4a16-a7ce-1fd5c3c4cdd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039241064s
STEP: Saw pod success
Jun  1 18:02:51.183: INFO: Pod "pod-secrets-566bfa3b-f66a-4a16-a7ce-1fd5c3c4cdd8" satisfied condition "Succeeded or Failed"
Jun  1 18:02:51.192: INFO: Trying to get logs from node kind-worker pod pod-secrets-566bfa3b-f66a-4a16-a7ce-1fd5c3c4cdd8 container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 18:02:51.281: INFO: Waiting for pod pod-secrets-566bfa3b-f66a-4a16-a7ce-1fd5c3c4cdd8 to disappear
Jun  1 18:02:51.293: INFO: Pod pod-secrets-566bfa3b-f66a-4a16-a7ce-1fd5c3c4cdd8 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 18:02:51.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6088" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":191,"skipped":3273,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 18:02:51.337: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod
Jun  1 18:02:51.438: INFO: PodSpec: initContainers in spec.initContainers
Jun  1 18:03:50.839: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-baf80373-a66c-437c-b38f-e6738dac9441", GenerateName:"", Namespace:"init-container-1026", SelfLink:"/api/v1/namespaces/init-container-1026/pods/pod-init-baf80373-a66c-437c-b38f-e6738dac9441", UID:"52dd7bf2-c019-4f81-9f0a-4433ba0a7b36", ResourceVersion:"23281", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726631371, loc:(*time.Location)(0x8006d20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"438002318"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0043cfea0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0043cfec0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0043cfee0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0043cff00)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-hzjkr", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00443a300), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hzjkr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hzjkr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hzjkr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004886378), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00113dc00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004886400)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004886440)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004886448), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00488644c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726631371, loc:(*time.Location)(0x8006d20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726631371, loc:(*time.Location)(0x8006d20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726631371, loc:(*time.Location)(0x8006d20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726631371, loc:(*time.Location)(0x8006d20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.3", PodIP:"10.244.2.118", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.118"}}, StartTime:(*v1.Time)(0xc0043cff20), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00113de30)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00113dea0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://8a2c3aabbd0b2869a97254338616d280ce301e524cb23590551336b8ebe1d867", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0043cff60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0043cff40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00488650f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Jun  1 18:03:50.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1026" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":292,"completed":192,"skipped":3300,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 8 lines ...
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 18:03:58.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8069" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":292,"completed":193,"skipped":3308,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 26 lines ...
  test/e2e/framework/framework.go:175
Jun  1 18:04:06.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1026" for this suite.
STEP: Destroying namespace "webhook-1026-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":292,"completed":194,"skipped":3351,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 44 lines ...
Jun  1 18:04:47.507: INFO: Deleting pod "simpletest.rc-tkpch" in namespace "gc-1618"
Jun  1 18:04:47.614: INFO: Deleting pod "simpletest.rc-tvfdb" in namespace "gc-1618"
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Jun  1 18:04:47.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1618" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":292,"completed":195,"skipped":3370,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 18:05:09.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1173" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":292,"completed":196,"skipped":3390,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 19 lines ...
Jun  1 18:05:21.151: INFO: stderr: ""
Jun  1 18:05:21.151: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 18:05:21.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-200" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":292,"completed":197,"skipped":3428,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 18:05:32.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2287" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":292,"completed":198,"skipped":3449,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 7 lines ...
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 18:06:32.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6451" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":292,"completed":199,"skipped":3459,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 18:06:32.726: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun  1 18:06:32.849: INFO: Waiting up to 5m0s for pod "pod-a0612698-545d-4492-a6ff-b31fca203c4d" in namespace "emptydir-2439" to be "Succeeded or Failed"
Jun  1 18:06:32.856: INFO: Pod "pod-a0612698-545d-4492-a6ff-b31fca203c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.144223ms
Jun  1 18:06:34.870: INFO: Pod "pod-a0612698-545d-4492-a6ff-b31fca203c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020879615s
Jun  1 18:06:36.887: INFO: Pod "pod-a0612698-545d-4492-a6ff-b31fca203c4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038385434s
STEP: Saw pod success
Jun  1 18:06:36.887: INFO: Pod "pod-a0612698-545d-4492-a6ff-b31fca203c4d" satisfied condition "Succeeded or Failed"
Jun  1 18:06:36.905: INFO: Trying to get logs from node kind-worker2 pod pod-a0612698-545d-4492-a6ff-b31fca203c4d container test-container: <nil>
STEP: delete the pod
Jun  1 18:06:37.017: INFO: Waiting for pod pod-a0612698-545d-4492-a6ff-b31fca203c4d to disappear
Jun  1 18:06:37.028: INFO: Pod pod-a0612698-545d-4492-a6ff-b31fca203c4d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 18:06:37.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2439" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":200,"skipped":3469,"failed":0}

------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Jun  1 18:06:45.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7049" for this suite.
STEP: Destroying namespace "webhook-7049-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":292,"completed":201,"skipped":3469,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should have an terminated reason [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Jun  1 18:06:54.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8226" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":292,"completed":202,"skipped":3485,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 12 lines ...
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 18:06:54.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9900" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":292,"completed":203,"skipped":3496,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 9 lines ...
STEP: creating pod
Jun  1 18:06:58.704: INFO: Pod pod-hostip-c00f4464-b48e-4845-98e6-697fbbc001a3 has hostIP: 172.18.0.2
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Jun  1 18:06:58.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9139" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":292,"completed":204,"skipped":3512,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Jun  1 18:06:58.752: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override command
Jun  1 18:06:58.906: INFO: Waiting up to 5m0s for pod "client-containers-c2824377-acd6-4d64-90a7-668600534a08" in namespace "containers-7661" to be "Succeeded or Failed"
Jun  1 18:06:58.941: INFO: Pod "client-containers-c2824377-acd6-4d64-90a7-668600534a08": Phase="Pending", Reason="", readiness=false. Elapsed: 34.556288ms
Jun  1 18:07:00.953: INFO: Pod "client-containers-c2824377-acd6-4d64-90a7-668600534a08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047298253s
Jun  1 18:07:02.969: INFO: Pod "client-containers-c2824377-acd6-4d64-90a7-668600534a08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062860786s
Jun  1 18:07:04.977: INFO: Pod "client-containers-c2824377-acd6-4d64-90a7-668600534a08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.070965323s
STEP: Saw pod success
Jun  1 18:07:04.977: INFO: Pod "client-containers-c2824377-acd6-4d64-90a7-668600534a08" satisfied condition "Succeeded or Failed"
Jun  1 18:07:04.988: INFO: Trying to get logs from node kind-worker2 pod client-containers-c2824377-acd6-4d64-90a7-668600534a08 container test-container: <nil>
STEP: delete the pod
Jun  1 18:07:05.101: INFO: Waiting for pod client-containers-c2824377-acd6-4d64-90a7-668600534a08 to disappear
Jun  1 18:07:05.117: INFO: Pod client-containers-c2824377-acd6-4d64-90a7-668600534a08 no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Jun  1 18:07:05.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7661" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":292,"completed":205,"skipped":3528,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 26 lines ...
Jun  1 18:07:55.456: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-5684 /api/v1/namespaces/watch-5684/configmaps/e2e-watch-test-configmap-b afb68130-2b9b-4cc8-a3be-5b7c2df37c7d 24570 0 2020-06-01 18:07:45 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-06-01 18:07:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jun  1 18:07:55.457: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-5684 /api/v1/namespaces/watch-5684/configmaps/e2e-watch-test-configmap-b afb68130-2b9b-4cc8-a3be-5b7c2df37c7d 24570 0 2020-06-01 18:07:45 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-06-01 18:07:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Jun  1 18:08:05.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5684" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":292,"completed":206,"skipped":3533,"failed":0}
SSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 3 lines ...
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  test/e2e/common/pods.go:179
[It] should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 18:08:09.751: INFO: Waiting up to 5m0s for pod "client-envvars-ac27cc4f-95bc-4c47-9e64-3026d935c6e6" in namespace "pods-5100" to be "Succeeded or Failed"
Jun  1 18:08:09.777: INFO: Pod "client-envvars-ac27cc4f-95bc-4c47-9e64-3026d935c6e6": Phase="Pending", Reason="", readiness=false. Elapsed: 25.694569ms
Jun  1 18:08:11.804: INFO: Pod "client-envvars-ac27cc4f-95bc-4c47-9e64-3026d935c6e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052787709s
Jun  1 18:08:13.816: INFO: Pod "client-envvars-ac27cc4f-95bc-4c47-9e64-3026d935c6e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064625902s
STEP: Saw pod success
Jun  1 18:08:13.817: INFO: Pod "client-envvars-ac27cc4f-95bc-4c47-9e64-3026d935c6e6" satisfied condition "Succeeded or Failed"
Jun  1 18:08:13.829: INFO: Trying to get logs from node kind-worker pod client-envvars-ac27cc4f-95bc-4c47-9e64-3026d935c6e6 container env3cont: <nil>
STEP: delete the pod
Jun  1 18:08:13.952: INFO: Waiting for pod client-envvars-ac27cc4f-95bc-4c47-9e64-3026d935c6e6 to disappear
Jun  1 18:08:13.958: INFO: Pod client-envvars-ac27cc4f-95bc-4c47-9e64-3026d935c6e6 no longer exists
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Jun  1 18:08:13.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5100" for this suite.
•{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":292,"completed":207,"skipped":3537,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 36 lines ...

W0601 18:08:14.882007   12365 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Jun  1 18:08:14.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2482" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":292,"completed":208,"skipped":3575,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-68d1ac02-9580-4be0-b112-975d84b08300
STEP: Creating a pod to test consume secrets
Jun  1 18:08:15.342: INFO: Waiting up to 5m0s for pod "pod-secrets-3a9c77b4-1bad-4f2a-b540-fb4acabac573" in namespace "secrets-1011" to be "Succeeded or Failed"
Jun  1 18:08:15.356: INFO: Pod "pod-secrets-3a9c77b4-1bad-4f2a-b540-fb4acabac573": Phase="Pending", Reason="", readiness=false. Elapsed: 12.935715ms
Jun  1 18:08:17.378: INFO: Pod "pod-secrets-3a9c77b4-1bad-4f2a-b540-fb4acabac573": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034532306s
Jun  1 18:08:19.398: INFO: Pod "pod-secrets-3a9c77b4-1bad-4f2a-b540-fb4acabac573": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054408689s
Jun  1 18:08:21.420: INFO: Pod "pod-secrets-3a9c77b4-1bad-4f2a-b540-fb4acabac573": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.076953432s
STEP: Saw pod success
Jun  1 18:08:21.420: INFO: Pod "pod-secrets-3a9c77b4-1bad-4f2a-b540-fb4acabac573" satisfied condition "Succeeded or Failed"
Jun  1 18:08:21.433: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-3a9c77b4-1bad-4f2a-b540-fb4acabac573 container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 18:08:21.541: INFO: Waiting for pod pod-secrets-3a9c77b4-1bad-4f2a-b540-fb4acabac573 to disappear
Jun  1 18:08:21.553: INFO: Pod pod-secrets-3a9c77b4-1bad-4f2a-b540-fb4acabac573 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 18:08:21.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1011" for this suite.
STEP: Destroying namespace "secret-namespace-5482" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":292,"completed":209,"skipped":3591,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 25 lines ...
Jun  1 18:08:41.969: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun  1 18:08:41.985: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Jun  1 18:08:41.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-711" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":292,"completed":210,"skipped":3665,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-d890c211-f546-4511-8d15-f037a40498a8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 18:08:50.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-968" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":211,"skipped":3676,"failed":0}
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 35 lines ...
Jun  1 18:11:10.953: INFO: Deleting pod "var-expansion-0a6da41f-bc43-49c2-8cec-dec72a2d3abe" in namespace "var-expansion-9422"
Jun  1 18:11:11.017: INFO: Wait up to 5m0s for pod "var-expansion-0a6da41f-bc43-49c2-8cec-dec72a2d3abe" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 18:11:53.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9422" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":292,"completed":212,"skipped":3680,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 40 lines ...
• [SLOW TEST:308.566 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":292,"completed":213,"skipped":3713,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 22 lines ...
Jun  1 18:17:12.006: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-6494 /api/v1/namespaces/watch-6494/configmaps/e2e-watch-test-label-changed 254bf30b-9d0a-4ff4-a132-59e380664af8 26500 0 2020-06-01 18:17:01 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-06-01 18:17:11 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Jun  1 18:17:12.006: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-6494 /api/v1/namespaces/watch-6494/configmaps/e2e-watch-test-label-changed 254bf30b-9d0a-4ff4-a132-59e380664af8 26501 0 2020-06-01 18:17:01 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-06-01 18:17:11 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Jun  1 18:17:12.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6494" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":292,"completed":214,"skipped":3720,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 11 lines ...
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 18:17:34.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-277" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":292,"completed":215,"skipped":3728,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Jun  1 18:17:34.097: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jun  1 18:17:34.229: INFO: Waiting up to 5m0s for pod "downward-api-51d80203-ac1d-4917-bf2c-1b5440c8fd3b" in namespace "downward-api-2514" to be "Succeeded or Failed"
Jun  1 18:17:34.249: INFO: Pod "downward-api-51d80203-ac1d-4917-bf2c-1b5440c8fd3b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.110434ms
Jun  1 18:17:36.253: INFO: Pod "downward-api-51d80203-ac1d-4917-bf2c-1b5440c8fd3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021874104s
Jun  1 18:17:38.265: INFO: Pod "downward-api-51d80203-ac1d-4917-bf2c-1b5440c8fd3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033691485s
STEP: Saw pod success
Jun  1 18:17:38.266: INFO: Pod "downward-api-51d80203-ac1d-4917-bf2c-1b5440c8fd3b" satisfied condition "Succeeded or Failed"
Jun  1 18:17:38.273: INFO: Trying to get logs from node kind-worker pod downward-api-51d80203-ac1d-4917-bf2c-1b5440c8fd3b container dapi-container: <nil>
STEP: delete the pod
Jun  1 18:17:38.366: INFO: Waiting for pod downward-api-51d80203-ac1d-4917-bf2c-1b5440c8fd3b to disappear
Jun  1 18:17:38.384: INFO: Pod downward-api-51d80203-ac1d-4917-bf2c-1b5440c8fd3b no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Jun  1 18:17:38.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2514" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":292,"completed":216,"skipped":3761,"failed":0}
SS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 14 lines ...
STEP: verifying the updated pod is in kubernetes
Jun  1 18:17:43.166: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Jun  1 18:17:43.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9965" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":292,"completed":217,"skipped":3763,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
Jun  1 18:17:47.997: INFO: Successfully updated pod "labelsupdatea892556d-9fbf-48de-82e9-563b8d2a6a5a"
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 18:17:50.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9078" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":292,"completed":218,"skipped":3796,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 18:18:06.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5593" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":292,"completed":219,"skipped":3803,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Jun  1 18:18:10.760: INFO: Initial restart count of pod test-webserver-d4861e20-42a4-4c89-b5b0-61e2acd65d10 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 18:22:12.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4582" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":292,"completed":220,"skipped":3811,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 52 lines ...
Jun  1 18:22:31.120: INFO: stderr: ""
Jun  1 18:22:31.120: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 18:22:31.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2822" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":292,"completed":221,"skipped":3828,"failed":0}
SSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Updating configmap configmap-test-upd-251df83d-f2ad-4acd-8a00-07bae9fb7bfe
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 18:22:37.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3799" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":222,"skipped":3833,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 73 lines ...
Jun  1 18:23:11.370: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7228/pods","resourceVersion":"27820"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Jun  1 18:23:11.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7228" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":292,"completed":223,"skipped":3896,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 18:23:27.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1059" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":292,"completed":224,"skipped":3920,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 30 lines ...
Jun  1 18:23:34.240: INFO: Selector matched 1 pods for map[app:agnhost]
Jun  1 18:23:34.240: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 18:23:34.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4079" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":292,"completed":225,"skipped":3923,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 18:23:34.392: INFO: Waiting up to 5m0s for pod "busybox-user-65534-ab1232ac-a1cc-4fb3-b521-f7e49378a643" in namespace "security-context-test-1407" to be "Succeeded or Failed"
Jun  1 18:23:34.406: INFO: Pod "busybox-user-65534-ab1232ac-a1cc-4fb3-b521-f7e49378a643": Phase="Pending", Reason="", readiness=false. Elapsed: 13.876693ms
Jun  1 18:23:36.423: INFO: Pod "busybox-user-65534-ab1232ac-a1cc-4fb3-b521-f7e49378a643": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031778159s
Jun  1 18:23:38.440: INFO: Pod "busybox-user-65534-ab1232ac-a1cc-4fb3-b521-f7e49378a643": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048543608s
Jun  1 18:23:38.440: INFO: Pod "busybox-user-65534-ab1232ac-a1cc-4fb3-b521-f7e49378a643" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Jun  1 18:23:38.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1407" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":226,"skipped":3932,"failed":0}

------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 18:23:44.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8362" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":227,"skipped":3932,"failed":0}
SS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
... skipping 21 lines ...
Jun  1 18:23:53.994: INFO: Pod "adopt-release-r4vg2": Phase="Running", Reason="", readiness=true. Elapsed: 2.02869078s
Jun  1 18:23:53.994: INFO: Pod "adopt-release-r4vg2" satisfied condition "released"
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Jun  1 18:23:53.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7938" for this suite.
•{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":292,"completed":228,"skipped":3934,"failed":0}
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 19 lines ...
Jun  1 18:24:10.298: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Jun  1 18:24:10.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-361" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":292,"completed":229,"skipped":3938,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 60 lines ...
Jun  1 18:24:23.278: INFO: stderr: ""
Jun  1 18:24:23.278: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 18:24:23.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5167" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":292,"completed":230,"skipped":3941,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 18:24:29.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1296" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":292,"completed":231,"skipped":3969,"failed":0}

------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 18:24:29.966: INFO: Waiting up to 5m0s for pod "downwardapi-volume-de8f9f3d-e98e-4a92-bba7-be25490ef88a" in namespace "downward-api-3483" to be "Succeeded or Failed"
Jun  1 18:24:29.980: INFO: Pod "downwardapi-volume-de8f9f3d-e98e-4a92-bba7-be25490ef88a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.992712ms
Jun  1 18:24:31.986: INFO: Pod "downwardapi-volume-de8f9f3d-e98e-4a92-bba7-be25490ef88a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020020065s
Jun  1 18:24:34.005: INFO: Pod "downwardapi-volume-de8f9f3d-e98e-4a92-bba7-be25490ef88a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039501653s
STEP: Saw pod success
Jun  1 18:24:34.005: INFO: Pod "downwardapi-volume-de8f9f3d-e98e-4a92-bba7-be25490ef88a" satisfied condition "Succeeded or Failed"
Jun  1 18:24:34.016: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-de8f9f3d-e98e-4a92-bba7-be25490ef88a container client-container: <nil>
STEP: delete the pod
Jun  1 18:24:34.078: INFO: Waiting for pod downwardapi-volume-de8f9f3d-e98e-4a92-bba7-be25490ef88a to disappear
Jun  1 18:24:34.083: INFO: Pod downwardapi-volume-de8f9f3d-e98e-4a92-bba7-be25490ef88a no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 18:24:34.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3483" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":232,"skipped":3969,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 18:24:34.258: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-108c3309-fbe1-4a29-967b-fb97ee74ce5b" in namespace "security-context-test-9362" to be "Succeeded or Failed"
Jun  1 18:24:34.269: INFO: Pod "busybox-privileged-false-108c3309-fbe1-4a29-967b-fb97ee74ce5b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.398719ms
Jun  1 18:24:36.281: INFO: Pod "busybox-privileged-false-108c3309-fbe1-4a29-967b-fb97ee74ce5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023703687s
Jun  1 18:24:38.293: INFO: Pod "busybox-privileged-false-108c3309-fbe1-4a29-967b-fb97ee74ce5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035357775s
Jun  1 18:24:38.293: INFO: Pod "busybox-privileged-false-108c3309-fbe1-4a29-967b-fb97ee74ce5b" satisfied condition "Succeeded or Failed"
Jun  1 18:24:38.320: INFO: Got logs for pod "busybox-privileged-false-108c3309-fbe1-4a29-967b-fb97ee74ce5b": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Jun  1 18:24:38.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9362" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":233,"skipped":3980,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
Jun  1 18:24:54.114: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 18:24:58.439: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 18:25:13.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9448" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":292,"completed":234,"skipped":3986,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Jun  1 18:25:17.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9174" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":235,"skipped":4006,"failed":0}

------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
Jun  1 18:25:22.741: INFO: Successfully updated pod "annotationupdatedb7ba5d5-764c-4fb3-8718-35d3674e7985"
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 18:25:26.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4404" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":292,"completed":236,"skipped":4006,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Jun  1 18:25:34.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-904" for this suite.
STEP: Destroying namespace "webhook-904-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":292,"completed":237,"skipped":4010,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 76 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 18:26:41.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1666" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":292,"completed":238,"skipped":4027,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 12 lines ...
Jun  1 18:26:46.589: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Jun  1 18:26:46.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8271" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":292,"completed":239,"skipped":4035,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Jun  1 18:26:46.748: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jun  1 18:26:46.885: INFO: Waiting up to 5m0s for pod "downward-api-964c5204-a017-4b25-9f30-0d5a8677503b" in namespace "downward-api-1862" to be "Succeeded or Failed"
Jun  1 18:26:46.898: INFO: Pod "downward-api-964c5204-a017-4b25-9f30-0d5a8677503b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.979969ms
Jun  1 18:26:48.937: INFO: Pod "downward-api-964c5204-a017-4b25-9f30-0d5a8677503b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051396914s
Jun  1 18:26:50.956: INFO: Pod "downward-api-964c5204-a017-4b25-9f30-0d5a8677503b": Phase="Running", Reason="", readiness=true. Elapsed: 4.070010526s
Jun  1 18:26:52.965: INFO: Pod "downward-api-964c5204-a017-4b25-9f30-0d5a8677503b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.079627938s
STEP: Saw pod success
Jun  1 18:26:52.966: INFO: Pod "downward-api-964c5204-a017-4b25-9f30-0d5a8677503b" satisfied condition "Succeeded or Failed"
Jun  1 18:26:52.978: INFO: Trying to get logs from node kind-worker pod downward-api-964c5204-a017-4b25-9f30-0d5a8677503b container dapi-container: <nil>
STEP: delete the pod
Jun  1 18:26:53.049: INFO: Waiting for pod downward-api-964c5204-a017-4b25-9f30-0d5a8677503b to disappear
Jun  1 18:26:53.064: INFO: Pod downward-api-964c5204-a017-4b25-9f30-0d5a8677503b no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Jun  1 18:26:53.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1862" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":292,"completed":240,"skipped":4067,"failed":0}

------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 85 lines ...
Jun  1 18:28:01.550: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jun  1 18:28:01.550: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jun  1 18:28:01.550: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jun  1 18:28:01.551: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 18:28:02.536: INFO: rc: 1
Jun  1 18:28:02.536: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "68758abaa0be45e5d126b67ee3b3c9dce8aec8cec01a7d407d4c740a2bce9227": task 4408f40bd09d50ce1dcb7ad60a972c7577f11cdc4e9e2521b331e6ab66147d20 not found: not found

error:
exit status 1
Jun  1 18:28:12.537: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 18:28:13.237: INFO: rc: 1
Jun  1 18:28:13.237: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
... skipping 120 lines ...
Jun  1 18:30:30.609: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 18:30:31.219: INFO: rc: 1
Jun  1 18:30:31.219: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 18:30:41.221: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 18:30:41.857: INFO: rc: 1
Jun  1 18:30:41.859: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 18:30:51.860: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 18:30:52.510: INFO: rc: 1
Jun  1 18:30:52.510: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 18:31:02.512: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 18:31:03.132: INFO: rc: 1
Jun  1 18:31:03.133: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 18:31:13.136: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 18:31:13.761: INFO: rc: 1
Jun  1 18:31:13.762: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 18:31:23.764: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 18:31:24.341: INFO: rc: 1
Jun  1 18:31:24.341: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 18:31:34.344: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 18:31:34.948: INFO: rc: 1
Jun  1 18:31:34.948: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 18:31:44.950: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 18:31:45.571: INFO: rc: 1
Jun  1 18:31:45.572: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 18:31:55.572: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 18:31:56.153: INFO: rc: 1
Jun  1 18:31:56.153: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 18:32:06.156: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 18:32:06.757: INFO: rc: 1
Jun  1 18:32:06.757: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 18:32:16.757: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 18:32:18.214: INFO: rc: 1
Jun  1 18:32:18.214: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 18:32:28.215: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 18:32:28.873: INFO: rc: 1
Jun  1 18:32:28.873: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 18:32:38.876: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 18:32:39.464: INFO: rc: 1
Jun  1 18:32:39.464: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 18:32:49.465: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 18:32:50.043: INFO: rc: 1
Jun  1 18:32:50.043: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 18:33:00.044: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 18:33:00.596: INFO: rc: 1
Jun  1 18:33:00.597: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 18:33:10.598: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:46423 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-7483 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 18:33:11.169: INFO: rc: 1
Jun  1 18:33:11.170: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Jun  1 18:33:11.170: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
... skipping 13 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:592
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":292,"completed":241,"skipped":4067,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-2e2224f9-0178-4c96-b181-c42fc853b6a1
STEP: Creating a pod to test consume configMaps
Jun  1 18:33:11.601: INFO: Waiting up to 5m0s for pod "pod-configmaps-9e946b23-d21f-4d80-8c72-0b66c5a36bbf" in namespace "configmap-9103" to be "Succeeded or Failed"
Jun  1 18:33:11.612: INFO: Pod "pod-configmaps-9e946b23-d21f-4d80-8c72-0b66c5a36bbf": Phase="Pending", Reason="", readiness=false. Elapsed: 11.042269ms
Jun  1 18:33:13.633: INFO: Pod "pod-configmaps-9e946b23-d21f-4d80-8c72-0b66c5a36bbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032267221s
Jun  1 18:33:15.669: INFO: Pod "pod-configmaps-9e946b23-d21f-4d80-8c72-0b66c5a36bbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068468272s
STEP: Saw pod success
Jun  1 18:33:15.670: INFO: Pod "pod-configmaps-9e946b23-d21f-4d80-8c72-0b66c5a36bbf" satisfied condition "Succeeded or Failed"
Jun  1 18:33:15.696: INFO: Trying to get logs from node kind-worker pod pod-configmaps-9e946b23-d21f-4d80-8c72-0b66c5a36bbf container configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 18:33:15.817: INFO: Waiting for pod pod-configmaps-9e946b23-d21f-4d80-8c72-0b66c5a36bbf to disappear
Jun  1 18:33:15.844: INFO: Pod pod-configmaps-9e946b23-d21f-4d80-8c72-0b66c5a36bbf no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 18:33:15.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9103" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":242,"skipped":4107,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath 
  runs ReplicaSets to verify preemption running path [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 32 lines ...
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/framework.go:175
Jun  1 18:35:03.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-4491" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:75
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":292,"completed":243,"skipped":4144,"failed":0}

------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 74 lines ...
Jun  1 18:35:31.230: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2585/pods","resourceVersion":"31179"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Jun  1 18:35:31.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2585" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":292,"completed":244,"skipped":4144,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-89cbe834-4925-4e63-86eb-ad07016d5a9a
STEP: Creating a pod to test consume secrets
Jun  1 18:35:31.472: INFO: Waiting up to 5m0s for pod "pod-secrets-8d8fb2ba-49bb-45a1-947b-9ff311d2eeee" in namespace "secrets-5477" to be "Succeeded or Failed"
Jun  1 18:35:31.480: INFO: Pod "pod-secrets-8d8fb2ba-49bb-45a1-947b-9ff311d2eeee": Phase="Pending", Reason="", readiness=false. Elapsed: 8.102485ms
Jun  1 18:35:33.492: INFO: Pod "pod-secrets-8d8fb2ba-49bb-45a1-947b-9ff311d2eeee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020249555s
Jun  1 18:35:35.501: INFO: Pod "pod-secrets-8d8fb2ba-49bb-45a1-947b-9ff311d2eeee": Phase="Running", Reason="", readiness=true. Elapsed: 4.028811215s
Jun  1 18:35:37.512: INFO: Pod "pod-secrets-8d8fb2ba-49bb-45a1-947b-9ff311d2eeee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039452172s
STEP: Saw pod success
Jun  1 18:35:37.512: INFO: Pod "pod-secrets-8d8fb2ba-49bb-45a1-947b-9ff311d2eeee" satisfied condition "Succeeded or Failed"
Jun  1 18:35:37.520: INFO: Trying to get logs from node kind-worker pod pod-secrets-8d8fb2ba-49bb-45a1-947b-9ff311d2eeee container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 18:35:37.600: INFO: Waiting for pod pod-secrets-8d8fb2ba-49bb-45a1-947b-9ff311d2eeee to disappear
Jun  1 18:35:37.613: INFO: Pod pod-secrets-8d8fb2ba-49bb-45a1-947b-9ff311d2eeee no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 18:35:37.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5477" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":292,"completed":245,"skipped":4175,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 45 lines ...
Jun  1 18:37:38.606: INFO: Waiting for statefulset status.replicas updated to 0
Jun  1 18:37:38.622: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Jun  1 18:37:38.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-319" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":292,"completed":246,"skipped":4203,"failed":0}

------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 58 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Jun  1 18:37:44.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-320" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":292,"completed":247,"skipped":4203,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 18:37:44.281: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun  1 18:37:44.401: INFO: Waiting up to 5m0s for pod "pod-e0bddff7-1980-4e31-9c6b-39015bea94e8" in namespace "emptydir-6839" to be "Succeeded or Failed"
Jun  1 18:37:44.421: INFO: Pod "pod-e0bddff7-1980-4e31-9c6b-39015bea94e8": Phase="Pending", Reason="", readiness=false. Elapsed: 19.839553ms
Jun  1 18:37:46.452: INFO: Pod "pod-e0bddff7-1980-4e31-9c6b-39015bea94e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05092378s
Jun  1 18:37:48.470: INFO: Pod "pod-e0bddff7-1980-4e31-9c6b-39015bea94e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068501513s
STEP: Saw pod success
Jun  1 18:37:48.472: INFO: Pod "pod-e0bddff7-1980-4e31-9c6b-39015bea94e8" satisfied condition "Succeeded or Failed"
Jun  1 18:37:48.489: INFO: Trying to get logs from node kind-worker2 pod pod-e0bddff7-1980-4e31-9c6b-39015bea94e8 container test-container: <nil>
STEP: delete the pod
Jun  1 18:37:48.635: INFO: Waiting for pod pod-e0bddff7-1980-4e31-9c6b-39015bea94e8 to disappear
Jun  1 18:37:48.653: INFO: Pod pod-e0bddff7-1980-4e31-9c6b-39015bea94e8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 18:37:48.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6839" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":248,"skipped":4225,"failed":0}
SSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 18:37:48.701: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 18:39:48.925: INFO: Deleting pod "var-expansion-236ce9da-7bac-40c0-83dd-10f54d4ce99c" in namespace "var-expansion-2122"
Jun  1 18:39:48.949: INFO: Wait up to 5m0s for pod "var-expansion-236ce9da-7bac-40c0-83dd-10f54d4ce99c" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 18:39:52.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2122" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":292,"completed":249,"skipped":4231,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 13 lines ...
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 18:40:10.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4269" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":292,"completed":250,"skipped":4261,"failed":0}

------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] 
  removing taint cancels eviction [Disruptive] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
... skipping 20 lines ...
STEP: Waiting some time to make sure that toleration time passed.
Jun  1 18:42:25.762: INFO: Pod wasn't evicted. Test successful
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  test/e2e/framework/framework.go:175
Jun  1 18:42:25.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-single-pod-8651" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":292,"completed":251,"skipped":4261,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 13 lines ...
Jun  1 18:42:27.021: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Jun  1 18:42:27.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1203" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":292,"completed":252,"skipped":4293,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-216778ee-eea0-4503-a6b5-58c68972b674
STEP: Creating a pod to test consume secrets
Jun  1 18:42:27.285: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-238f6f33-2e9b-42bd-8895-862f6cea3f51" in namespace "projected-8980" to be "Succeeded or Failed"
Jun  1 18:42:27.304: INFO: Pod "pod-projected-secrets-238f6f33-2e9b-42bd-8895-862f6cea3f51": Phase="Pending", Reason="", readiness=false. Elapsed: 18.150702ms
Jun  1 18:42:29.336: INFO: Pod "pod-projected-secrets-238f6f33-2e9b-42bd-8895-862f6cea3f51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050378283s
Jun  1 18:42:31.352: INFO: Pod "pod-projected-secrets-238f6f33-2e9b-42bd-8895-862f6cea3f51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066899329s
STEP: Saw pod success
Jun  1 18:42:31.354: INFO: Pod "pod-projected-secrets-238f6f33-2e9b-42bd-8895-862f6cea3f51" satisfied condition "Succeeded or Failed"
Jun  1 18:42:31.393: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-238f6f33-2e9b-42bd-8895-862f6cea3f51 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun  1 18:42:31.553: INFO: Waiting for pod pod-projected-secrets-238f6f33-2e9b-42bd-8895-862f6cea3f51 to disappear
Jun  1 18:42:31.573: INFO: Pod pod-projected-secrets-238f6f33-2e9b-42bd-8895-862f6cea3f51 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Jun  1 18:42:31.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8980" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":253,"skipped":4300,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-0095ebeb-03f5-484b-84f5-276c02579125
STEP: Creating a pod to test consume configMaps
Jun  1 18:42:31.790: INFO: Waiting up to 5m0s for pod "pod-configmaps-6271901d-b3c8-4027-a11a-51d14340e26f" in namespace "configmap-8340" to be "Succeeded or Failed"
Jun  1 18:42:31.808: INFO: Pod "pod-configmaps-6271901d-b3c8-4027-a11a-51d14340e26f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.856809ms
Jun  1 18:42:33.824: INFO: Pod "pod-configmaps-6271901d-b3c8-4027-a11a-51d14340e26f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034065696s
Jun  1 18:42:35.833: INFO: Pod "pod-configmaps-6271901d-b3c8-4027-a11a-51d14340e26f": Phase="Running", Reason="", readiness=true. Elapsed: 4.042750463s
Jun  1 18:42:37.855: INFO: Pod "pod-configmaps-6271901d-b3c8-4027-a11a-51d14340e26f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.064630084s
STEP: Saw pod success
Jun  1 18:42:37.855: INFO: Pod "pod-configmaps-6271901d-b3c8-4027-a11a-51d14340e26f" satisfied condition "Succeeded or Failed"
Jun  1 18:42:37.873: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-6271901d-b3c8-4027-a11a-51d14340e26f container configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 18:42:37.937: INFO: Waiting for pod pod-configmaps-6271901d-b3c8-4027-a11a-51d14340e26f to disappear
Jun  1 18:42:37.949: INFO: Pod pod-configmaps-6271901d-b3c8-4027-a11a-51d14340e26f no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 18:42:37.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8340" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":292,"completed":254,"skipped":4319,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 23 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 18:42:52.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2622" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":292,"completed":255,"skipped":4331,"failed":0}
SS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
... skipping 42 lines ...
Jun  1 18:43:08.458: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 18:43:08.857: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  test/e2e/framework/framework.go:175
Jun  1 18:43:08.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4096" for this suite.
•{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":256,"skipped":4333,"failed":0}
SSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
Jun  1 18:43:15.581: INFO: Unable to read jessie_udp@dns-test-service.dns-6326 from pod dns-6326/dns-test-517372fa-50c9-4774-bac8-e698634bac71: the server could not find the requested resource (get pods dns-test-517372fa-50c9-4774-bac8-e698634bac71)
Jun  1 18:43:15.594: INFO: Unable to read jessie_tcp@dns-test-service.dns-6326 from pod dns-6326/dns-test-517372fa-50c9-4774-bac8-e698634bac71: the server could not find the requested resource (get pods dns-test-517372fa-50c9-4774-bac8-e698634bac71)
Jun  1 18:43:15.621: INFO: Unable to read jessie_udp@dns-test-service.dns-6326.svc from pod dns-6326/dns-test-517372fa-50c9-4774-bac8-e698634bac71: the server could not find the requested resource (get pods dns-test-517372fa-50c9-4774-bac8-e698634bac71)
Jun  1 18:43:15.646: INFO: Unable to read jessie_tcp@dns-test-service.dns-6326.svc from pod dns-6326/dns-test-517372fa-50c9-4774-bac8-e698634bac71: the server could not find the requested resource (get pods dns-test-517372fa-50c9-4774-bac8-e698634bac71)
Jun  1 18:43:15.670: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6326.svc from pod dns-6326/dns-test-517372fa-50c9-4774-bac8-e698634bac71: the server could not find the requested resource (get pods dns-test-517372fa-50c9-4774-bac8-e698634bac71)
Jun  1 18:43:15.694: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6326.svc from pod dns-6326/dns-test-517372fa-50c9-4774-bac8-e698634bac71: the server could not find the requested resource (get pods dns-test-517372fa-50c9-4774-bac8-e698634bac71)
Jun  1 18:43:15.852: INFO: Lookups using dns-6326/dns-test-517372fa-50c9-4774-bac8-e698634bac71 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6326 wheezy_tcp@dns-test-service.dns-6326 wheezy_udp@dns-test-service.dns-6326.svc wheezy_tcp@dns-test-service.dns-6326.svc wheezy_udp@_http._tcp.dns-test-service.dns-6326.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6326.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6326 jessie_tcp@dns-test-service.dns-6326 jessie_udp@dns-test-service.dns-6326.svc jessie_tcp@dns-test-service.dns-6326.svc jessie_udp@_http._tcp.dns-test-service.dns-6326.svc jessie_tcp@_http._tcp.dns-test-service.dns-6326.svc]

Jun  1 18:43:20.862: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6326/dns-test-517372fa-50c9-4774-bac8-e698634bac71: the server could not find the requested resource (get pods dns-test-517372fa-50c9-4774-bac8-e698634bac71)
Jun  1 18:43:20.897: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6326/dns-test-517372fa-50c9-4774-bac8-e698634bac71: the server could not find the requested resource (get pods dns-test-517372fa-50c9-4774-bac8-e698634bac71)
Jun  1 18:43:20.914: INFO: Unable to read wheezy_udp@dns-test-service.dns-6326 from pod dns-6326/dns-test-517372fa-50c9-4774-bac8-e698634bac71: the server could not find the requested resource (get pods dns-test-517372fa-50c9-4774-bac8-e698634bac71)
Jun  1 18:43:20.936: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6326 from pod dns-6326/dns-test-517372fa-50c9-4774-bac8-e698634bac71: the server could not find the requested resource (get pods dns-test-517372fa-50c9-4774-bac8-e698634bac71)
Jun  1 18:43:20.953: INFO: Unable to read wheezy_udp@dns-test-service.dns-6326.svc from pod dns-6326/dns-test-517372fa-50c9-4774-bac8-e698634bac71: the server could not find the requested resource (get pods dns-test-517372fa-50c9-4774-bac8-e698634bac71)
Jun  1 18:43:20.977: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6326.svc from pod dns-6326/dns-test-517372fa-50c9-4774-bac8-e698634bac71: the server could not find the requested resource (get pods dns-test-517372fa-50c9-4774-bac8-e698634bac71)
Jun  1 18:43:21.223: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6326/dns-test-517372fa-50c9-4774-bac8-e698634bac71: the server could not find the requested resource (get pods dns-test-517372fa-50c9-4774-bac8-e698634bac71)
Jun  1 18:43:21.262: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6326/dns-test-517372fa-50c9-4774-bac8-e698634bac71: the server could not find the requested resource (get pods dns-test-517372fa-50c9-4774-bac8-e698634bac71)
Jun  1 18:43:21.276: INFO: Unable to read jessie_udp@dns-test-service.dns-6326 from pod dns-6326/dns-test-517372fa-50c9-4774-bac8-e698634bac71: the server could not find the requested resource (get pods dns-test-517372fa-50c9-4774-bac8-e698634bac71)
Jun  1 18:43:21.305: INFO: Unable to read jessie_tcp@dns-test-service.dns-6326 from pod dns-6326/dns-test-517372fa-50c9-4774-bac8-e698634bac71: the server could not find the requested resource (get pods dns-test-517372fa-50c9-4774-bac8-e698634bac71)
Jun  1 18:43:21.324: INFO: Unable to read jessie_udp@dns-test-service.dns-6326.svc from pod dns-6326/dns-test-517372fa-50c9-4774-bac8-e698634bac71: the server could not find the requested resource (get pods dns-test-517372fa-50c9-4774-bac8-e698634bac71)
Jun  1 18:43:21.336: INFO: Unable to read jessie_tcp@dns-test-service.dns-6326.svc from pod dns-6326/dns-test-517372fa-50c9-4774-bac8-e698634bac71: the server could not find the requested resource (get pods dns-test-517372fa-50c9-4774-bac8-e698634bac71)
Jun  1 18:43:21.506: INFO: Lookups using dns-6326/dns-test-517372fa-50c9-4774-bac8-e698634bac71 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6326 wheezy_tcp@dns-test-service.dns-6326 wheezy_udp@dns-test-service.dns-6326.svc wheezy_tcp@dns-test-service.dns-6326.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6326 jessie_tcp@dns-test-service.dns-6326 jessie_udp@dns-test-service.dns-6326.svc jessie_tcp@dns-test-service.dns-6326.svc]


... skipping 56 lines (4 more identical lookup rounds at ~5s intervals, 18:43:25 through 18:43:41, each failing for the same 12 wheezy_*/jessie_* names) ...
Jun  1 18:43:46.389: INFO: DNS probes using dns-6326/dns-test-517372fa-50c9-4774-bac8-e698634bac71 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 18:43:46.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6326" for this suite.
•{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":292,"completed":257,"skipped":4341,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 7 lines ...
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod busybox-b011a431-5c0b-4ebb-9479-71ea557b31c2 in namespace container-probe-304
Jun  1 18:43:50.950: INFO: Started pod busybox-b011a431-5c0b-4ebb-9479-71ea557b31c2 in namespace container-probe-304
STEP: checking the pod's current state and verifying that restartCount is present
Jun  1 18:43:50.963: INFO: Initial restart count of pod busybox-b011a431-5c0b-4ebb-9479-71ea557b31c2 is 0
{"component":"entrypoint","file":"prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","time":"2020-06-01T18:44:04Z"}
{"component":"entrypoint","file":"prow/entrypoint/run.go:245","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","time":"2020-06-01T18:44:19Z"}