Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-05-31 17:30
Elapsed: 2h 0m
Revision: master
Resultstore: https://source.cloud.google.com/results/invocations/a77a5a5a-bb8f-44b2-ab4e-47ba426f9283/targets/test

No Test Failures!


Error lines from build-log.txt

... skipping 69 lines ...
Analyzing: 4 targets (20 packages loaded, 27 targets configured)
Analyzing: 4 targets (328 packages loaded, 5257 targets configured)
Analyzing: 4 targets (1443 packages loaded, 11584 targets configured)
Analyzing: 4 targets (2268 packages loaded, 15447 targets configured)
Analyzing: 4 targets (2268 packages loaded, 15447 targets configured)
Analyzing: 4 targets (2269 packages loaded, 15447 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/bazel_gazelle/internal/go_repository.bzl:184:13: org_golang_x_tools: gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go:2:43: expected 'package', found 'EOF'
gazelle: found packages imports (imports.go) and lib (issue27856.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gccgoimporter/testdata
gazelle: found packages exports (exports.go) and p (issue15920.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gcimporter/testdata
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go:1:34: expected 'package', found 'EOF'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go:1:16: expected ';', found '.'
gazelle: finding module path for import domain.name/importdecl: exit status 1: can't load package: package domain.name/importdecl: cannot find module providing package domain.name/importdecl
gazelle: finding module path for import old.com/one: exit status 1: can't load package: package old.com/one: cannot find module providing package old.com/one
gazelle: finding module path for import titanic.biz/bar: exit status 1: can't load package: package titanic.biz/bar: cannot find module providing package titanic.biz/bar
gazelle: finding module path for import titanic.biz/foo: exit status 1: can't load package: package titanic.biz/foo: cannot find module providing package titanic.biz/foo
gazelle: finding module path for import fruit.io/pear: exit status 1: can't load package: package fruit.io/pear: cannot find module providing package fruit.io/pear
gazelle: finding module path for import fruit.io/banana: exit status 1: can't load package: package fruit.io/banana: cannot find module providing package fruit.io/banana
... skipping 156 lines ...
WARNING: Waiting for server process to terminate (waited 10 seconds, waiting at most 60)
WARNING: Waiting for server process to terminate (waited 30 seconds, waiting at most 60)
INFO: Waited 60 seconds for server process (pid=5774) to terminate.
WARNING: Waiting for server process to terminate (waited 5 seconds, waiting at most 10)
WARNING: Waiting for server process to terminate (waited 10 seconds, waiting at most 10)
INFO: Waited 10 seconds for server process (pid=5774) to terminate.
FATAL: Attempted to kill stale server process (pid=5774) using SIGKILL, but it did not die in a timely fashion.
+ true
+ pkill ^bazel
+ true
+ mkdir -p _output/bin/
+ cp bazel-bin/test/e2e/e2e.test _output/bin/
+ find /home/prow/go/src/k8s.io/kubernetes/bazel-bin/ -name kubectl -type f
... skipping 46 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.3
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.3
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.18.0.3
... skipping 4 lines ...
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.3
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 32 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.4
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.4
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: kind-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 32 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.2
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.2
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: kind-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.2
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 37 lines ...
I0531 17:39:49.591632     224 checks.go:376] validating the presence of executable ebtables
I0531 17:39:49.592008     224 checks.go:376] validating the presence of executable ethtool
I0531 17:39:49.592028     224 checks.go:376] validating the presence of executable socat
I0531 17:39:49.592059     224 checks.go:376] validating the presence of executable tc
I0531 17:39:49.592079     224 checks.go:376] validating the presence of executable touch
I0531 17:39:49.592122     224 checks.go:520] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0531 17:39:49.626193     224 checks.go:406] checking whether the given node name is reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
... skipping 99 lines ...
I0531 17:40:05.095291     224 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 27 milliseconds
I0531 17:40:05.592577     224 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 24 milliseconds
I0531 17:40:06.109540     224 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 40 milliseconds
I0531 17:40:06.590215     224 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 22 milliseconds
I0531 17:40:07.092668     224 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 24 milliseconds
I0531 17:40:07.588551     224 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 18 milliseconds
I0531 17:40:17.665347     224 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 9597 milliseconds
I0531 17:40:18.070454     224 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 2 milliseconds
I0531 17:40:18.572042     224 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 4 milliseconds
I0531 17:40:19.069235     224 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds
I0531 17:40:19.569001     224 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 200 OK in 1 milliseconds
[apiclient] All control plane components are healthy after 24.049583 seconds
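A note on the healthz loop above: kubeadm keeps GETting /healthz with a 10s per-request timeout until the API server answers 200 OK, tolerating the 500s while components come up. A minimal Go sketch of that pattern, not kubeadm's actual code (the URL is the one from the log; skipping certificate verification is illustration-only):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the timeout elapses,
// mirroring the round_trippers loop in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 10 * time.Second, // matches the ?timeout=10s on each request
		// No CA bundle in this sketch, so skip verification; kubeadm itself
		// verifies against the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // control plane reports healthy
			}
			// 500s while components start up are expected; keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz at %s not OK within %v", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://kind-control-plane:6443/healthz", 4*time.Minute))
}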
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0531 17:40:19.570393     224 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
I0531 17:40:19.583317     224 round_trippers.go:443] POST https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 6 milliseconds
I0531 17:40:19.588628     224 round_trippers.go:443] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 4 milliseconds
... skipping 108 lines ...
I0531 17:40:33.388014     587 checks.go:376] validating the presence of executable ebtables
I0531 17:40:33.388705     587 checks.go:376] validating the presence of executable ethtool
I0531 17:40:33.389316     587 checks.go:376] validating the presence of executable socat
I0531 17:40:33.389368     587 checks.go:376] validating the presence of executable tc
I0531 17:40:33.389398     587 checks.go:376] validating the presence of executable touch
I0531 17:40:33.389442     587 checks.go:520] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0531 17:40:33.418404     587 checks.go:406] checking whether the given node name is reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
... skipping 79 lines ...
I0531 17:40:33.427393     586 checks.go:376] validating the presence of executable ebtables
I0531 17:40:33.427427     586 checks.go:376] validating the presence of executable ethtool
I0531 17:40:33.427451     586 checks.go:376] validating the presence of executable socat
I0531 17:40:33.427491     586 checks.go:376] validating the presence of executable tc
I0531 17:40:33.427516     586 checks.go:376] validating the presence of executable touch
I0531 17:40:33.427568     586 checks.go:520] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_PIDS: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0531 17:40:33.455394     586 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0531 17:40:33.474843     586 checks.go:618] validating kubelet version
I0531 17:40:33.775184     586 checks.go:128] validating if the "kubelet" service is enabled and active
I0531 17:40:33.809176     586 checks.go:201] validating availability of port 10250
I0531 17:40:33.809434     586 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
I0531 17:40:33.809474     586 checks.go:432] validating if the connectivity type is via proxy or direct
... skipping 72 lines ...
+ GINKGO_PID=11185
+ wait 11185
+ ./hack/ginkgo-e2e.sh --provider=skeleton --num-nodes=2 --ginkgo.focus=\[Conformance\] --ginkgo.skip= --report-dir=/logs/artifacts --disable-log-dump=true
Conformance test: not doing test setup.
I0531 17:41:15.146361   11886 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0531 17:41:15.146534   11886 e2e.go:129] Starting e2e run "7fb5b2ea-5576-419c-b266-d23cd1deacf0" on Ginkgo node 1
{"msg":"Test Suite starting","total":292,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1590946873 - Will randomize all specs
Will run 292 of 5101 specs

May 31 17:41:15.199: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 17:41:15.332: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3fb2ed3c-3da0-46c5-bff8-e41c4b737795" in namespace "downward-api-2218" to be "Succeeded or Failed"
May 31 17:41:15.339: INFO: Pod "downwardapi-volume-3fb2ed3c-3da0-46c5-bff8-e41c4b737795": Phase="Pending", Reason="", readiness=false. Elapsed: 6.62749ms
May 31 17:41:17.346: INFO: Pod "downwardapi-volume-3fb2ed3c-3da0-46c5-bff8-e41c4b737795": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013942562s
May 31 17:41:19.351: INFO: Pod "downwardapi-volume-3fb2ed3c-3da0-46c5-bff8-e41c4b737795": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018100326s
May 31 17:41:21.436: INFO: Pod "downwardapi-volume-3fb2ed3c-3da0-46c5-bff8-e41c4b737795": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103682795s
May 31 17:41:23.446: INFO: Pod "downwardapi-volume-3fb2ed3c-3da0-46c5-bff8-e41c4b737795": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.113520121s
STEP: Saw pod success
May 31 17:41:23.446: INFO: Pod "downwardapi-volume-3fb2ed3c-3da0-46c5-bff8-e41c4b737795" satisfied condition "Succeeded or Failed"
May 31 17:41:23.452: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-3fb2ed3c-3da0-46c5-bff8-e41c4b737795 container client-container: <nil>
STEP: delete the pod
May 31 17:41:23.498: INFO: Waiting for pod downwardapi-volume-3fb2ed3c-3da0-46c5-bff8-e41c4b737795 to disappear
May 31 17:41:23.502: INFO: Pod downwardapi-volume-3fb2ed3c-3da0-46c5-bff8-e41c4b737795 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
May 31 17:41:23.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2218" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":292,"completed":1,"skipped":14,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Updating configmap configmap-test-upd-0f4135d9-b241-4747-82b7-9823765fdfaf
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
May 31 17:42:34.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3842" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":2,"skipped":44,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 22 lines ...
May 31 17:42:38.388: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
May 31 17:42:38.388: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:42753 --kubeconfig=/root/.kube/kind-test-config describe pod agnhost-master-l9b6w --namespace=kubectl-1031'
May 31 17:42:38.728: INFO: stderr: ""
May 31 17:42:38.728: INFO: stdout: "Name:         agnhost-master-l9b6w\nNamespace:    kubectl-1031\nPriority:     0\nNode:         kind-worker2/172.18.0.2\nStart Time:   Sun, 31 May 2020 17:42:35 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.244.2.4\nIPs:\n  IP:           10.244.2.4\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://f336512355e97d8c32672c5719cb5a33b2c6018a82b78a31aff50f7cb3421023\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sun, 31 May 2020 17:42:37 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jh7fh (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-jh7fh:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-jh7fh\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From                   Message\n  ----    ------     ----  ----                   -------\n  Normal  Scheduled  3s    default-scheduler      Successfully assigned kubectl-1031/agnhost-master-l9b6w to kind-worker2\n  Normal  Pulled     1s    kubelet, kind-worker2  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n  Normal  Created    1s    kubelet, kind-worker2  Created container agnhost-master\n  Normal  Started    1s    kubelet, kind-worker2  Started container agnhost-master\n"
May 31 17:42:38.728: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:42753 --kubeconfig=/root/.kube/kind-test-config describe rc agnhost-master --namespace=kubectl-1031'
May 31 17:42:39.096: INFO: stderr: ""
May 31 17:42:39.096: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-1031\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  4s    replication-controller  Created pod: agnhost-master-l9b6w\n"
May 31 17:42:39.096: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:42753 --kubeconfig=/root/.kube/kind-test-config describe service agnhost-master --namespace=kubectl-1031'
May 31 17:42:39.431: INFO: stderr: ""
May 31 17:42:39.431: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-1031\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.109.219.224\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.2.4:6379\nSession Affinity:  None\nEvents:            <none>\n"
May 31 17:42:39.436: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:42753 --kubeconfig=/root/.kube/kind-test-config describe node kind-control-plane'
May 31 17:42:39.820: INFO: stderr: ""
May 31 17:42:39.820: INFO: stdout: "Name:               kind-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kind-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 31 May 2020 17:40:17 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kind-control-plane\n  AcquireTime:     <unset>\n  RenewTime:       Sun, 31 May 2020 17:42:37 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Sun, 31 May 2020 17:40:57 +0000   Sun, 31 May 2020 17:40:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Sun, 31 May 2020 17:40:57 +0000   Sun, 31 May 2020 17:40:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Sun, 31 May 2020 17:40:57 +0000   Sun, 31 May 2020 17:40:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Sun, 31 May 2020 17:40:57 +0000   Sun, 31 May 2020 17:40:57 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.3\n  Hostname:    kind-control-plane\nCapacity:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             53582972Ki\n  pods:               110\nAllocatable:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             53582972Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 69320a11dfac48a0b45b32414eb0c62a\n  System UUID:                0626d099-9298-49a0-a3c0-92c18d7b4b0c\n  Boot ID:                    17878ff9-0f01-4f08-b06d-17fcaca6fed2\n  Kernel Version:             4.15.0-1044-gke\n  OS Image:                   Ubuntu 20.04 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.4-12-g1e902b2d\n  Kubelet Version:            v1.19.0-beta.0.313+46d08c89ab9f55\n  Kube-Proxy Version:         v1.19.0-beta.0.313+46d08c89ab9f55\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (8 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-66bff467f8-hzpdg                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m3s\n  kube-system                 etcd-kind-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s\n  kube-system       
          kindnet-6fpcp                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m4s\n  kube-system                 kube-apiserver-kind-control-plane             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m12s\n  kube-system                 kube-controller-manager-kind-control-plane    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m12s\n  kube-system                 kube-proxy-v4m89                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s\n  kube-system                 kube-scheduler-kind-control-plane             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m12s\n  local-path-storage          local-path-provisioner-bd4bb6b75-54d9w        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                750m (9%)   100m (1%)\n  memory             120Mi (0%)  220Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:\n  Type     Reason                    Age                    From                            Message\n  ----     ------                    ----                   ----                            -------\n  Normal   NodeHasSufficientMemory   2m37s (x5 over 2m37s)  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure     2m37s (x5 over 2m37s)  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID      2m37s (x4 over 2m37s)  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientPID\n  Normal   Starting                  2m13s                  kubelet, kind-control-plane     Starting kubelet.\n  Normal   NodeHasSufficientMemory   2m13s                  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure     2m13s                  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID      2m13s                  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientPID\n  Warning  CheckLimitsForResolvConf  2m12s                  kubelet, kind-control-plane     Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\n  Normal   NodeAllocatableEnforced   2m12s                  kubelet, kind-control-plane     Updated Node Allocatable limit across pods\n  Normal   Starting                  119s                   kube-proxy, kind-control-plane  Starting kube-proxy.\n  Normal   NodeReady                 102s                   kubelet, kind-control-plane     Node kind-control-plane status is now: NodeReady\n"
May 31 17:42:39.820: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:42753 --kubeconfig=/root/.kube/kind-test-config describe namespace kubectl-1031'
May 31 17:42:40.187: INFO: stderr: ""
May 31 17:42:40.187: INFO: stdout: "Name:         kubectl-1031\nLabels:       e2e-framework=kubectl\n              e2e-run=7fb5b2ea-5576-419c-b266-d23cd1deacf0\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 17:42:40.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1031" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":292,"completed":3,"skipped":50,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 54 lines ...
May 31 17:43:06.482: INFO: stderr: ""
May 31 17:43:06.482: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 17:43:06.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6875" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":292,"completed":4,"skipped":106,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
May 31 17:43:22.779: INFO: Unable to read jessie_udp@dns-test-service.dns-3567 from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:22.783: INFO: Unable to read jessie_tcp@dns-test-service.dns-3567 from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:22.787: INFO: Unable to read jessie_udp@dns-test-service.dns-3567.svc from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:22.791: INFO: Unable to read jessie_tcp@dns-test-service.dns-3567.svc from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:22.800: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3567.svc from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:22.806: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3567.svc from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:22.828: INFO: Lookups using dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3567 wheezy_tcp@dns-test-service.dns-3567 wheezy_udp@dns-test-service.dns-3567.svc wheezy_tcp@dns-test-service.dns-3567.svc wheezy_udp@_http._tcp.dns-test-service.dns-3567.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3567.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3567 jessie_tcp@dns-test-service.dns-3567 jessie_udp@dns-test-service.dns-3567.svc jessie_tcp@dns-test-service.dns-3567.svc jessie_udp@_http._tcp.dns-test-service.dns-3567.svc jessie_tcp@_http._tcp.dns-test-service.dns-3567.svc]

May 31 17:43:27.839: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:27.847: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:27.851: INFO: Unable to read wheezy_udp@dns-test-service.dns-3567 from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:27.856: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3567 from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:27.860: INFO: Unable to read wheezy_udp@dns-test-service.dns-3567.svc from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
... skipping 5 lines ...
May 31 17:43:27.941: INFO: Unable to read jessie_udp@dns-test-service.dns-3567 from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:27.946: INFO: Unable to read jessie_tcp@dns-test-service.dns-3567 from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:27.950: INFO: Unable to read jessie_udp@dns-test-service.dns-3567.svc from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:27.955: INFO: Unable to read jessie_tcp@dns-test-service.dns-3567.svc from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:27.959: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3567.svc from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:27.963: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3567.svc from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:28.051: INFO: Lookups using dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3567 wheezy_tcp@dns-test-service.dns-3567 wheezy_udp@dns-test-service.dns-3567.svc wheezy_tcp@dns-test-service.dns-3567.svc wheezy_udp@_http._tcp.dns-test-service.dns-3567.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3567.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3567 jessie_tcp@dns-test-service.dns-3567 jessie_udp@dns-test-service.dns-3567.svc jessie_tcp@dns-test-service.dns-3567.svc jessie_udp@_http._tcp.dns-test-service.dns-3567.svc jessie_tcp@_http._tcp.dns-test-service.dns-3567.svc]

May 31 17:43:32.834: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:32.839: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:32.844: INFO: Unable to read wheezy_udp@dns-test-service.dns-3567 from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:32.851: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3567 from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:32.855: INFO: Unable to read wheezy_udp@dns-test-service.dns-3567.svc from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
... skipping 5 lines ...
May 31 17:43:32.912: INFO: Unable to read jessie_udp@dns-test-service.dns-3567 from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:32.918: INFO: Unable to read jessie_tcp@dns-test-service.dns-3567 from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:32.923: INFO: Unable to read jessie_udp@dns-test-service.dns-3567.svc from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:32.927: INFO: Unable to read jessie_tcp@dns-test-service.dns-3567.svc from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:32.931: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3567.svc from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:32.936: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3567.svc from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:32.964: INFO: Lookups using dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3567 wheezy_tcp@dns-test-service.dns-3567 wheezy_udp@dns-test-service.dns-3567.svc wheezy_tcp@dns-test-service.dns-3567.svc wheezy_udp@_http._tcp.dns-test-service.dns-3567.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3567.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3567 jessie_tcp@dns-test-service.dns-3567 jessie_udp@dns-test-service.dns-3567.svc jessie_tcp@dns-test-service.dns-3567.svc jessie_udp@_http._tcp.dns-test-service.dns-3567.svc jessie_tcp@_http._tcp.dns-test-service.dns-3567.svc]

May 31 17:43:37.832: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:37.837: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:37.843: INFO: Unable to read wheezy_udp@dns-test-service.dns-3567 from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:37.848: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3567 from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:37.852: INFO: Unable to read wheezy_udp@dns-test-service.dns-3567.svc from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
... skipping 5 lines ...
May 31 17:43:37.904: INFO: Unable to read jessie_udp@dns-test-service.dns-3567 from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:37.908: INFO: Unable to read jessie_tcp@dns-test-service.dns-3567 from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:37.911: INFO: Unable to read jessie_udp@dns-test-service.dns-3567.svc from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:37.915: INFO: Unable to read jessie_tcp@dns-test-service.dns-3567.svc from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:37.919: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3567.svc from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:37.923: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3567.svc from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:37.945: INFO: Lookups using dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3567 wheezy_tcp@dns-test-service.dns-3567 wheezy_udp@dns-test-service.dns-3567.svc wheezy_tcp@dns-test-service.dns-3567.svc wheezy_udp@_http._tcp.dns-test-service.dns-3567.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3567.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3567 jessie_tcp@dns-test-service.dns-3567 jessie_udp@dns-test-service.dns-3567.svc jessie_tcp@dns-test-service.dns-3567.svc jessie_udp@_http._tcp.dns-test-service.dns-3567.svc jessie_tcp@_http._tcp.dns-test-service.dns-3567.svc]

May 31 17:43:42.871: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3567.svc from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:42.928: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3567.svc from pod dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c: the server could not find the requested resource (get pods dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c)
May 31 17:43:42.971: INFO: Lookups using dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c failed for: [wheezy_tcp@_http._tcp.dns-test-service.dns-3567.svc jessie_udp@_http._tcp.dns-test-service.dns-3567.svc]

May 31 17:43:47.970: INFO: DNS probes using dns-3567/dns-test-65c7b76f-a8e1-44a3-acc4-795dc39f880c succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
May 31 17:43:48.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3567" for this suite.
•{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":292,"completed":5,"skipped":130,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 52 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 17:44:16.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6087" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":292,"completed":6,"skipped":153,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 17:44:16.911: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 31 17:44:16.979: INFO: Waiting up to 5m0s for pod "pod-bba64f7e-f941-4c18-80c9-00dcb3f9ceea" in namespace "emptydir-3992" to be "Succeeded or Failed"
May 31 17:44:16.984: INFO: Pod "pod-bba64f7e-f941-4c18-80c9-00dcb3f9ceea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.889861ms
May 31 17:44:18.992: INFO: Pod "pod-bba64f7e-f941-4c18-80c9-00dcb3f9ceea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013250179s
May 31 17:44:21.006: INFO: Pod "pod-bba64f7e-f941-4c18-80c9-00dcb3f9ceea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027652812s
STEP: Saw pod success
May 31 17:44:21.007: INFO: Pod "pod-bba64f7e-f941-4c18-80c9-00dcb3f9ceea" satisfied condition "Succeeded or Failed"
May 31 17:44:21.015: INFO: Trying to get logs from node kind-worker2 pod pod-bba64f7e-f941-4c18-80c9-00dcb3f9ceea container test-container: <nil>
STEP: delete the pod
May 31 17:44:21.050: INFO: Waiting for pod pod-bba64f7e-f941-4c18-80c9-00dcb3f9ceea to disappear
May 31 17:44:21.056: INFO: Pod pod-bba64f7e-f941-4c18-80c9-00dcb3f9ceea no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 17:44:21.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3992" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":7,"skipped":155,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 44 lines ...
May 31 17:45:01.464: INFO: Deleting pod "simpletest.rc-s9wrt" in namespace "gc-8173"
May 31 17:45:01.502: INFO: Deleting pod "simpletest.rc-vjdz2" in namespace "gc-8173"
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
May 31 17:45:01.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8173" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":292,"completed":8,"skipped":175,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 17:45:01.709: INFO: Waiting up to 5m0s for pod "downwardapi-volume-41883b2b-388f-4f13-98d9-9f2e1cf6041d" in namespace "projected-9916" to be "Succeeded or Failed"
May 31 17:45:01.736: INFO: Pod "downwardapi-volume-41883b2b-388f-4f13-98d9-9f2e1cf6041d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.815641ms
May 31 17:45:03.747: INFO: Pod "downwardapi-volume-41883b2b-388f-4f13-98d9-9f2e1cf6041d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037693875s
May 31 17:45:05.754: INFO: Pod "downwardapi-volume-41883b2b-388f-4f13-98d9-9f2e1cf6041d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045170085s
STEP: Saw pod success
May 31 17:45:05.755: INFO: Pod "downwardapi-volume-41883b2b-388f-4f13-98d9-9f2e1cf6041d" satisfied condition "Succeeded or Failed"
May 31 17:45:05.760: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-41883b2b-388f-4f13-98d9-9f2e1cf6041d container client-container: <nil>
STEP: delete the pod
May 31 17:45:05.795: INFO: Waiting for pod downwardapi-volume-41883b2b-388f-4f13-98d9-9f2e1cf6041d to disappear
May 31 17:45:05.802: INFO: Pod downwardapi-volume-41883b2b-388f-4f13-98d9-9f2e1cf6041d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
May 31 17:45:05.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9916" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":292,"completed":9,"skipped":202,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 17:45:05.820: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 31 17:45:05.883: INFO: Waiting up to 5m0s for pod "pod-90ad8a96-7ec8-4296-b59a-a703d2e446fc" in namespace "emptydir-6718" to be "Succeeded or Failed"
May 31 17:45:05.890: INFO: Pod "pod-90ad8a96-7ec8-4296-b59a-a703d2e446fc": Phase="Pending", Reason="", readiness=false. Elapsed: 7.473848ms
May 31 17:45:07.898: INFO: Pod "pod-90ad8a96-7ec8-4296-b59a-a703d2e446fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01491398s
May 31 17:45:09.908: INFO: Pod "pod-90ad8a96-7ec8-4296-b59a-a703d2e446fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025198709s
STEP: Saw pod success
May 31 17:45:09.911: INFO: Pod "pod-90ad8a96-7ec8-4296-b59a-a703d2e446fc" satisfied condition "Succeeded or Failed"
May 31 17:45:09.920: INFO: Trying to get logs from node kind-worker2 pod pod-90ad8a96-7ec8-4296-b59a-a703d2e446fc container test-container: <nil>
STEP: delete the pod
May 31 17:45:09.964: INFO: Waiting for pod pod-90ad8a96-7ec8-4296-b59a-a703d2e446fc to disappear
May 31 17:45:09.971: INFO: Pod pod-90ad8a96-7ec8-4296-b59a-a703d2e446fc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 17:45:09.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6718" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":10,"skipped":217,"failed":0}
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 32 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
May 31 17:45:18.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8939" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":292,"completed":11,"skipped":224,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
May 31 17:45:22.835: INFO: Successfully updated pod "annotationupdate13b31342-dd2d-42cf-9025-691747e6b090"
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
May 31 17:45:24.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5980" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":292,"completed":12,"skipped":237,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 31 17:45:24.863: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename webhook
... skipping 6 lines ...
STEP: Wait for the deployment to be ready
May 31 17:45:25.987: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 31 17:45:28.012: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726543925, loc:(*time.Location)(0x8006d20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726543925, loc:(*time.Location)(0x8006d20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726543926, loc:(*time.Location)(0x8006d20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726543925, loc:(*time.Location)(0x8006d20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 31 17:45:31.026: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 17:45:31.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8956" for this suite.
STEP: Destroying namespace "webhook-8956-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":292,"completed":13,"skipped":250,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-568eec01-cf7b-46c1-90f9-a36ebb79d213
STEP: Creating a pod to test consume configMaps
May 31 17:45:31.424: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c0d596a9-0503-4a51-9d34-a48ef383a965" in namespace "projected-1863" to be "Succeeded or Failed"
May 31 17:45:31.431: INFO: Pod "pod-projected-configmaps-c0d596a9-0503-4a51-9d34-a48ef383a965": Phase="Pending", Reason="", readiness=false. Elapsed: 7.165159ms
May 31 17:45:33.442: INFO: Pod "pod-projected-configmaps-c0d596a9-0503-4a51-9d34-a48ef383a965": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018157163s
May 31 17:45:35.446: INFO: Pod "pod-projected-configmaps-c0d596a9-0503-4a51-9d34-a48ef383a965": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021513469s
STEP: Saw pod success
May 31 17:45:35.446: INFO: Pod "pod-projected-configmaps-c0d596a9-0503-4a51-9d34-a48ef383a965" satisfied condition "Succeeded or Failed"
May 31 17:45:35.450: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-c0d596a9-0503-4a51-9d34-a48ef383a965 container projected-configmap-volume-test: <nil>
STEP: delete the pod
May 31 17:45:35.474: INFO: Waiting for pod pod-projected-configmaps-c0d596a9-0503-4a51-9d34-a48ef383a965 to disappear
May 31 17:45:35.478: INFO: Pod pod-projected-configmaps-c0d596a9-0503-4a51-9d34-a48ef383a965 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
May 31 17:45:35.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1863" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":292,"completed":14,"skipped":255,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 27 lines ...
May 31 17:45:57.611: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
May 31 17:45:57.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4553" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":292,"completed":15,"skipped":295,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] PodTemplates 
  should run the lifecycle of PodTemplates [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] PodTemplates
... skipping 5 lines ...
[It] should run the lifecycle of PodTemplates [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [sig-node] PodTemplates
  test/e2e/framework/framework.go:175
May 31 17:45:57.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-9641" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":292,"completed":16,"skipped":316,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-efe43785-1956-42cc-a0f9-1dc6f63d905e
STEP: Creating a pod to test consume secrets
May 31 17:45:57.812: INFO: Waiting up to 5m0s for pod "pod-secrets-bdc79a42-f91c-432f-8df8-e22168c58b8c" in namespace "secrets-9058" to be "Succeeded or Failed"
May 31 17:45:57.816: INFO: Pod "pod-secrets-bdc79a42-f91c-432f-8df8-e22168c58b8c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.250582ms
May 31 17:45:59.824: INFO: Pod "pod-secrets-bdc79a42-f91c-432f-8df8-e22168c58b8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012086374s
May 31 17:46:01.835: INFO: Pod "pod-secrets-bdc79a42-f91c-432f-8df8-e22168c58b8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023694485s
STEP: Saw pod success
May 31 17:46:01.836: INFO: Pod "pod-secrets-bdc79a42-f91c-432f-8df8-e22168c58b8c" satisfied condition "Succeeded or Failed"
May 31 17:46:01.843: INFO: Trying to get logs from node kind-worker pod pod-secrets-bdc79a42-f91c-432f-8df8-e22168c58b8c container secret-volume-test: <nil>
STEP: delete the pod
May 31 17:46:01.880: INFO: Waiting for pod pod-secrets-bdc79a42-f91c-432f-8df8-e22168c58b8c to disappear
May 31 17:46:01.884: INFO: Pod pod-secrets-bdc79a42-f91c-432f-8df8-e22168c58b8c no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
May 31 17:46:01.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9058" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":17,"skipped":352,"failed":0}
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 29 lines ...
May 31 17:46:28.362: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 31 17:46:28.607: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
May 31 17:46:28.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6005" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":18,"skipped":356,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
May 31 17:46:28.670: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 31 17:46:32.571: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 17:46:47.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1352" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":292,"completed":19,"skipped":400,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 31 17:46:47.469: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
May 31 17:48:47.618: INFO: Deleting pod "var-expansion-0c00ff59-6fa9-4d91-bf4d-2a33c5301c11" in namespace "var-expansion-7888"
May 31 17:48:47.628: INFO: Wait up to 5m0s for pod "var-expansion-0c00ff59-6fa9-4d91-bf4d-2a33c5301c11" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
May 31 17:48:57.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7888" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":292,"completed":20,"skipped":411,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-1e26202d-5408-48ed-bd92-f6cd6fc3e250
STEP: Creating a pod to test consume secrets
May 31 17:48:57.744: INFO: Waiting up to 5m0s for pod "pod-secrets-43ff1991-880a-4dd2-ba6b-f7ec64b3769a" in namespace "secrets-334" to be "Succeeded or Failed"
May 31 17:48:57.748: INFO: Pod "pod-secrets-43ff1991-880a-4dd2-ba6b-f7ec64b3769a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137328ms
May 31 17:48:59.754: INFO: Pod "pod-secrets-43ff1991-880a-4dd2-ba6b-f7ec64b3769a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010105522s
May 31 17:49:01.764: INFO: Pod "pod-secrets-43ff1991-880a-4dd2-ba6b-f7ec64b3769a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019732395s
STEP: Saw pod success
May 31 17:49:01.764: INFO: Pod "pod-secrets-43ff1991-880a-4dd2-ba6b-f7ec64b3769a" satisfied condition "Succeeded or Failed"
May 31 17:49:01.773: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-43ff1991-880a-4dd2-ba6b-f7ec64b3769a container secret-volume-test: <nil>
STEP: delete the pod
May 31 17:49:01.812: INFO: Waiting for pod pod-secrets-43ff1991-880a-4dd2-ba6b-f7ec64b3769a to disappear
May 31 17:49:01.815: INFO: Pod pod-secrets-43ff1991-880a-4dd2-ba6b-f7ec64b3769a no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
May 31 17:49:01.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-334" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":21,"skipped":430,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 17:49:01.886: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b75db235-10e7-4dc4-9544-7aa886cf1f67" in namespace "downward-api-4024" to be "Succeeded or Failed"
May 31 17:49:01.888: INFO: Pod "downwardapi-volume-b75db235-10e7-4dc4-9544-7aa886cf1f67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.433035ms
May 31 17:49:03.894: INFO: Pod "downwardapi-volume-b75db235-10e7-4dc4-9544-7aa886cf1f67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008440586s
May 31 17:49:05.902: INFO: Pod "downwardapi-volume-b75db235-10e7-4dc4-9544-7aa886cf1f67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016170065s
STEP: Saw pod success
May 31 17:49:05.902: INFO: Pod "downwardapi-volume-b75db235-10e7-4dc4-9544-7aa886cf1f67" satisfied condition "Succeeded or Failed"
May 31 17:49:05.912: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-b75db235-10e7-4dc4-9544-7aa886cf1f67 container client-container: <nil>
STEP: delete the pod
May 31 17:49:05.960: INFO: Waiting for pod downwardapi-volume-b75db235-10e7-4dc4-9544-7aa886cf1f67 to disappear
May 31 17:49:05.972: INFO: Pod downwardapi-volume-b75db235-10e7-4dc4-9544-7aa886cf1f67 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
May 31 17:49:05.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4024" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":22,"skipped":440,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
May 31 17:49:06.067: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 17:49:07.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7176" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":292,"completed":23,"skipped":510,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
May 31 17:49:18.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8919" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":292,"completed":24,"skipped":519,"failed":0}
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
May 31 17:49:22.378: INFO: Initial restart count of pod busybox-b2e8e205-d1e4-46a3-af59-f49c3375ad24 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
May 31 17:53:23.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9406" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":292,"completed":25,"skipped":520,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
May 31 17:53:27.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1936" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":26,"skipped":547,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 44 lines ...
May 31 17:53:52.619: INFO: Pod "test-rollover-deployment-7c4fd9c879-wjmbp" is available:
&Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-wjmbp test-rollover-deployment-7c4fd9c879- deployment-1088 /api/v1/namespaces/deployment-1088/pods/test-rollover-deployment-7c4fd9c879-wjmbp a06a6591-71bf-44e6-a38e-92a30b8e2cc4 4073 0 2020-05-31 17:53:40 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 37fc5482-5c74-4bc2-88aa-0c369ef7c4fd 0xc0008c3e47 0xc0008c3e48}] []  [{kube-controller-manager Update v1 2020-05-31 17:53:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37fc5482-5c74-4bc2-88aa-0c369ef7c4fd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-31 17:53:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.16\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r24fv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r24fv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r24fv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,Securit
yContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 17:53:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 17:53:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 17:53:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 17:53:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.1.16,StartTime:2020-05-31 17:53:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-31 17:53:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://7049572e1b5b559394f2f85832bc09d672c16e7a5adb59361b06fbc13f96b55f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.16,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
May 31 17:53:52.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1088" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":292,"completed":27,"skipped":565,"failed":0}
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 31 17:53:52.632: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod
May 31 17:53:52.674: INFO: PodSpec: initContainers in spec.initContainers
May 31 17:54:43.222: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-0518d5c2-303b-4385-87ec-5361fe4b5b1b", GenerateName:"", Namespace:"init-container-3251", SelfLink:"/api/v1/namespaces/init-container-3251/pods/pod-init-0518d5c2-303b-4385-87ec-5361fe4b5b1b", UID:"f4b56312-843e-4aec-a88b-fe384ba5b588", ResourceVersion:"4334", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726544432, loc:(*time.Location)(0x8006d20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"674047846"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001f60a40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001f60a60)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001f60aa0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001f60b00)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-j4vg8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00225b440), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-j4vg8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", 
Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-j4vg8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-j4vg8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001a5ace8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0023b21c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001a5ad70)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001a5ad90)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001a5ad98), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001a5ad9c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726544432, loc:(*time.Location)(0x8006d20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726544432, loc:(*time.Location)(0x8006d20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726544432, loc:(*time.Location)(0x8006d20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726544432, loc:(*time.Location)(0x8006d20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.2", PodIP:"10.244.2.29", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.29"}}, StartTime:(*v1.Time)(0xc001f60b40), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0023b22a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0023b2310)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://68202dbadd83aca9a5ad60ec24e49cce60d4390924edef6653cfcf43d4d4cea8", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001f60b80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001f60b60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc001a5ae1f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
May 31 17:54:43.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3251" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":292,"completed":28,"skipped":568,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
May 31 17:54:47.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3685" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":292,"completed":29,"skipped":586,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
... skipping 12 lines ...
May 31 17:54:52.520: INFO: Trying to dial the pod
May 31 17:54:57.532: INFO: Controller my-hostname-basic-ac065025-f405-473c-abe2-94f0401f5ef7: Got expected result from replica 1 [my-hostname-basic-ac065025-f405-473c-abe2-94f0401f5ef7-4jrt5]: "my-hostname-basic-ac065025-f405-473c-abe2-94f0401f5ef7-4jrt5", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:175
May 31 17:54:57.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2351" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":292,"completed":30,"skipped":654,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 11 lines ...
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 17:55:17.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-99" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":292,"completed":31,"skipped":657,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
May 31 17:55:17.886: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 31 17:55:21.471: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 17:55:35.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9222" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":292,"completed":32,"skipped":676,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 40 lines ...
May 31 17:55:45.665: INFO: Deleting pod "simpletest-rc-to-be-deleted-6nrrg" in namespace "gc-2008"
May 31 17:55:45.679: INFO: Deleting pod "simpletest-rc-to-be-deleted-6tcwl" in namespace "gc-2008"
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
May 31 17:55:45.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2008" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":292,"completed":33,"skipped":689,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
May 31 17:55:45.707: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's command
May 31 17:55:45.776: INFO: Waiting up to 5m0s for pod "var-expansion-2bfa9bad-443c-4247-ae1a-ff480886176d" in namespace "var-expansion-5225" to be "Succeeded or Failed"
May 31 17:55:45.782: INFO: Pod "var-expansion-2bfa9bad-443c-4247-ae1a-ff480886176d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.560385ms
May 31 17:55:47.794: INFO: Pod "var-expansion-2bfa9bad-443c-4247-ae1a-ff480886176d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017441197s
May 31 17:55:49.798: INFO: Pod "var-expansion-2bfa9bad-443c-4247-ae1a-ff480886176d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021662504s
STEP: Saw pod success
May 31 17:55:49.798: INFO: Pod "var-expansion-2bfa9bad-443c-4247-ae1a-ff480886176d" satisfied condition "Succeeded or Failed"
May 31 17:55:49.804: INFO: Trying to get logs from node kind-worker2 pod var-expansion-2bfa9bad-443c-4247-ae1a-ff480886176d container dapi-container: <nil>
STEP: delete the pod
May 31 17:55:49.838: INFO: Waiting for pod var-expansion-2bfa9bad-443c-4247-ae1a-ff480886176d to disappear
May 31 17:55:49.841: INFO: Pod var-expansion-2bfa9bad-443c-4247-ae1a-ff480886176d no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
May 31 17:55:49.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5225" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":292,"completed":34,"skipped":700,"failed":0}
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 26 lines ...
May 31 17:56:11.186: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 31 17:56:12.407: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
May 31 17:56:12.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4422" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":35,"skipped":704,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
May 31 17:56:12.429: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May 31 17:56:12.478: INFO: Waiting up to 5m0s for pod "downward-api-6d7d2f7e-ed1b-4788-8090-a0f70edec33e" in namespace "downward-api-4125" to be "Succeeded or Failed"
May 31 17:56:12.484: INFO: Pod "downward-api-6d7d2f7e-ed1b-4788-8090-a0f70edec33e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.007807ms
May 31 17:56:14.488: INFO: Pod "downward-api-6d7d2f7e-ed1b-4788-8090-a0f70edec33e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009374775s
May 31 17:56:16.492: INFO: Pod "downward-api-6d7d2f7e-ed1b-4788-8090-a0f70edec33e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0137974s
STEP: Saw pod success
May 31 17:56:16.494: INFO: Pod "downward-api-6d7d2f7e-ed1b-4788-8090-a0f70edec33e" satisfied condition "Succeeded or Failed"
May 31 17:56:16.499: INFO: Trying to get logs from node kind-worker2 pod downward-api-6d7d2f7e-ed1b-4788-8090-a0f70edec33e container dapi-container: <nil>
STEP: delete the pod
May 31 17:56:16.520: INFO: Waiting for pod downward-api-6d7d2f7e-ed1b-4788-8090-a0f70edec33e to disappear
May 31 17:56:16.524: INFO: Pod downward-api-6d7d2f7e-ed1b-4788-8090-a0f70edec33e no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
May 31 17:56:16.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4125" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":292,"completed":36,"skipped":734,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 8 lines ...
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
May 31 17:56:23.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4644" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":292,"completed":37,"skipped":766,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
... skipping 21 lines ...
May 31 17:56:34.707: INFO: Pod "adopt-release-vb2m6": Phase="Running", Reason="", readiness=true. Elapsed: 2.010976759s
May 31 17:56:34.707: INFO: Pod "adopt-release-vb2m6" satisfied condition "released"
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
May 31 17:56:34.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7" for this suite.
•{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":292,"completed":38,"skipped":775,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 17:56:34.720: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
May 31 17:56:34.764: INFO: Waiting up to 5m0s for pod "pod-91173ade-3bb5-4f6a-81ff-2e557e62ec6f" in namespace "emptydir-4908" to be "Succeeded or Failed"
May 31 17:56:34.767: INFO: Pod "pod-91173ade-3bb5-4f6a-81ff-2e557e62ec6f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.06495ms
May 31 17:56:36.772: INFO: Pod "pod-91173ade-3bb5-4f6a-81ff-2e557e62ec6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007933997s
May 31 17:56:38.777: INFO: Pod "pod-91173ade-3bb5-4f6a-81ff-2e557e62ec6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01317191s
STEP: Saw pod success
May 31 17:56:38.778: INFO: Pod "pod-91173ade-3bb5-4f6a-81ff-2e557e62ec6f" satisfied condition "Succeeded or Failed"
May 31 17:56:38.780: INFO: Trying to get logs from node kind-worker2 pod pod-91173ade-3bb5-4f6a-81ff-2e557e62ec6f container test-container: <nil>
STEP: delete the pod
May 31 17:56:38.810: INFO: Waiting for pod pod-91173ade-3bb5-4f6a-81ff-2e557e62ec6f to disappear
May 31 17:56:38.814: INFO: Pod pod-91173ade-3bb5-4f6a-81ff-2e557e62ec6f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 17:56:38.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4908" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":39,"skipped":795,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 26 lines ...
May 31 17:56:42.268: INFO: Selector matched 1 pods for map[app:agnhost]
May 31 17:56:42.268: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 17:56:42.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4995" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":292,"completed":40,"skipped":818,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 31 17:56:42.291: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap that has name configmap-test-emptyKey-3d8caca3-8746-436d-9843-8fb7a189cc98
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
May 31 17:56:42.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1445" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":292,"completed":41,"skipped":882,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 31 17:56:42.342: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
May 31 17:58:42.414: INFO: Deleting pod "var-expansion-1eaf621d-ada1-4f0a-b389-acce339ebe10" in namespace "var-expansion-1224"
May 31 17:58:42.423: INFO: Wait up to 5m0s for pod "var-expansion-1eaf621d-ada1-4f0a-b389-acce339ebe10" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
May 31 17:58:48.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1224" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":292,"completed":42,"skipped":969,"failed":0}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
May 31 17:59:04.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7959" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":292,"completed":43,"skipped":973,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
May 31 17:59:11.570: INFO: stderr: ""
May 31 17:59:11.570: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7804-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<map[string]>\n     Specification of Waldo\n\n   status\t<Object>\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 17:59:13.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3570" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":292,"completed":44,"skipped":976,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 19 lines ...
May 31 17:59:16.588: INFO: Deleting pod "var-expansion-b1348313-f218-4b2d-bbf2-9936757b8dd1" in namespace "var-expansion-4273"
May 31 17:59:16.600: INFO: Wait up to 5m0s for pod "var-expansion-b1348313-f218-4b2d-bbf2-9936757b8dd1" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
May 31 17:59:58.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4273" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":292,"completed":45,"skipped":996,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
May 31 18:00:05.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3655" for this suite.
STEP: Destroying namespace "webhook-3655-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":292,"completed":46,"skipped":1001,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 18:00:15.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-2715" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":292,"completed":47,"skipped":1007,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
May 31 18:00:21.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1802" for this suite.
STEP: Destroying namespace "webhook-1802-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":292,"completed":48,"skipped":1036,"failed":0}
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 21 lines ...
May 31 18:00:37.791: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 31 18:00:37.798: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
May 31 18:00:37.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7935" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":292,"completed":49,"skipped":1040,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 31 18:00:55.968: INFO: File wheezy_udp@dns-test-service-3.dns-2917.svc.cluster.local from pod  dns-2917/dns-test-0ca38741-fd2f-4bbc-b4d1-ecbaeca2bb26 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 31 18:00:55.974: INFO: File jessie_udp@dns-test-service-3.dns-2917.svc.cluster.local from pod  dns-2917/dns-test-0ca38741-fd2f-4bbc-b4d1-ecbaeca2bb26 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 31 18:00:55.974: INFO: Lookups using dns-2917/dns-test-0ca38741-fd2f-4bbc-b4d1-ecbaeca2bb26 failed for: [wheezy_udp@dns-test-service-3.dns-2917.svc.cluster.local jessie_udp@dns-test-service-3.dns-2917.svc.cluster.local]

May 31 18:01:00.979: INFO: File wheezy_udp@dns-test-service-3.dns-2917.svc.cluster.local from pod  dns-2917/dns-test-0ca38741-fd2f-4bbc-b4d1-ecbaeca2bb26 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 31 18:01:00.984: INFO: File jessie_udp@dns-test-service-3.dns-2917.svc.cluster.local from pod  dns-2917/dns-test-0ca38741-fd2f-4bbc-b4d1-ecbaeca2bb26 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 31 18:01:00.984: INFO: Lookups using dns-2917/dns-test-0ca38741-fd2f-4bbc-b4d1-ecbaeca2bb26 failed for: [wheezy_udp@dns-test-service-3.dns-2917.svc.cluster.local jessie_udp@dns-test-service-3.dns-2917.svc.cluster.local]

May 31 18:01:05.982: INFO: File wheezy_udp@dns-test-service-3.dns-2917.svc.cluster.local from pod  dns-2917/dns-test-0ca38741-fd2f-4bbc-b4d1-ecbaeca2bb26 contains '' instead of 'bar.example.com.'
May 31 18:01:05.990: INFO: File jessie_udp@dns-test-service-3.dns-2917.svc.cluster.local from pod  dns-2917/dns-test-0ca38741-fd2f-4bbc-b4d1-ecbaeca2bb26 contains 'foo.example.com.
' instead of 'bar.example.com.'
May 31 18:01:05.991: INFO: Lookups using dns-2917/dns-test-0ca38741-fd2f-4bbc-b4d1-ecbaeca2bb26 failed for: [wheezy_udp@dns-test-service-3.dns-2917.svc.cluster.local jessie_udp@dns-test-service-3.dns-2917.svc.cluster.local]

May 31 18:01:10.995: INFO: DNS probes using dns-test-0ca38741-fd2f-4bbc-b4d1-ecbaeca2bb26 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2917.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2917.svc.cluster.local; sleep 1; done
... skipping 9 lines ...
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
May 31 18:01:15.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2917" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":292,"completed":50,"skipped":1061,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 12 lines ...
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 18:01:15.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2056" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":292,"completed":51,"skipped":1096,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-23f68c1b-3866-4d38-a36f-876381c0d47f
STEP: Creating a pod to test consume secrets
May 31 18:01:15.323: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e39db694-bf39-4ffc-8b7e-eb56d27922cb" in namespace "projected-7494" to be "Succeeded or Failed"
May 31 18:01:15.327: INFO: Pod "pod-projected-secrets-e39db694-bf39-4ffc-8b7e-eb56d27922cb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.633619ms
May 31 18:01:17.334: INFO: Pod "pod-projected-secrets-e39db694-bf39-4ffc-8b7e-eb56d27922cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010674944s
May 31 18:01:19.339: INFO: Pod "pod-projected-secrets-e39db694-bf39-4ffc-8b7e-eb56d27922cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015569306s
STEP: Saw pod success
May 31 18:01:19.339: INFO: Pod "pod-projected-secrets-e39db694-bf39-4ffc-8b7e-eb56d27922cb" satisfied condition "Succeeded or Failed"
May 31 18:01:19.343: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-e39db694-bf39-4ffc-8b7e-eb56d27922cb container projected-secret-volume-test: <nil>
STEP: delete the pod
May 31 18:01:19.383: INFO: Waiting for pod pod-projected-secrets-e39db694-bf39-4ffc-8b7e-eb56d27922cb to disappear
May 31 18:01:19.387: INFO: Pod pod-projected-secrets-e39db694-bf39-4ffc-8b7e-eb56d27922cb no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
May 31 18:01:19.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7494" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":52,"skipped":1117,"failed":0}

------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 15 lines ...
  test/e2e/framework/framework.go:175
May 31 18:01:25.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2316" for this suite.
STEP: Destroying namespace "nsdeletetest-1138" for this suite.
May 31 18:01:25.579: INFO: Namespace nsdeletetest-1138 was already deleted
STEP: Destroying namespace "nsdeletetest-4877" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":292,"completed":53,"skipped":1117,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 67 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 18:02:06.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7474" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":292,"completed":54,"skipped":1150,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 20 lines ...
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
May 31 18:02:31.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9432" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":292,"completed":55,"skipped":1157,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 31 18:02:31.407: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod
May 31 18:02:31.447: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
May 31 18:02:38.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5017" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":292,"completed":56,"skipped":1171,"failed":0}

------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] LimitRange
... skipping 31 lines ...
May 31 18:02:46.012: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  test/e2e/framework/framework.go:175
May 31 18:02:46.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-7922" for this suite.
•{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":292,"completed":57,"skipped":1171,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 27 lines ...
  test/e2e/framework/framework.go:175
May 31 18:02:53.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7278" for this suite.
STEP: Destroying namespace "webhook-7278-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":292,"completed":58,"skipped":1195,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
May 31 18:02:53.656: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override command
May 31 18:02:53.730: INFO: Waiting up to 5m0s for pod "client-containers-a50382d2-969e-4a89-822b-10d25cb279c7" in namespace "containers-4798" to be "Succeeded or Failed"
May 31 18:02:53.744: INFO: Pod "client-containers-a50382d2-969e-4a89-822b-10d25cb279c7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.056212ms
May 31 18:02:55.748: INFO: Pod "client-containers-a50382d2-969e-4a89-822b-10d25cb279c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018005257s
May 31 18:02:57.755: INFO: Pod "client-containers-a50382d2-969e-4a89-822b-10d25cb279c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02439878s
STEP: Saw pod success
May 31 18:02:57.755: INFO: Pod "client-containers-a50382d2-969e-4a89-822b-10d25cb279c7" satisfied condition "Succeeded or Failed"
May 31 18:02:57.760: INFO: Trying to get logs from node kind-worker2 pod client-containers-a50382d2-969e-4a89-822b-10d25cb279c7 container test-container: <nil>
STEP: delete the pod
May 31 18:02:57.801: INFO: Waiting for pod client-containers-a50382d2-969e-4a89-822b-10d25cb279c7 to disappear
May 31 18:02:57.806: INFO: Pod client-containers-a50382d2-969e-4a89-822b-10d25cb279c7 no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
May 31 18:02:57.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4798" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":292,"completed":59,"skipped":1213,"failed":0}

------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 22 lines ...
May 31 18:03:07.926: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-5442 /api/v1/namespaces/watch-5442/configmaps/e2e-watch-test-label-changed a0189f35-b228-4658-bbee-44ef82e31999 7422 0 2020-05-31 18:02:57 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-31 18:03:07 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
May 31 18:03:07.926: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-5442 /api/v1/namespaces/watch-5442/configmaps/e2e-watch-test-label-changed a0189f35-b228-4658-bbee-44ef82e31999 7423 0 2020-05-31 18:02:57 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-31 18:03:07 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
May 31 18:03:07.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5442" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":292,"completed":60,"skipped":1213,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-7dcc79e3-0ce7-47bc-ab4f-9c709817d23a
STEP: Creating a pod to test consume secrets
May 31 18:03:08.004: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-52c20537-8724-40a4-91ba-d27cf07c80f0" in namespace "projected-5992" to be "Succeeded or Failed"
May 31 18:03:08.011: INFO: Pod "pod-projected-secrets-52c20537-8724-40a4-91ba-d27cf07c80f0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.854831ms
May 31 18:03:10.019: INFO: Pod "pod-projected-secrets-52c20537-8724-40a4-91ba-d27cf07c80f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015104012s
May 31 18:03:12.024: INFO: Pod "pod-projected-secrets-52c20537-8724-40a4-91ba-d27cf07c80f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020499584s
STEP: Saw pod success
May 31 18:03:12.024: INFO: Pod "pod-projected-secrets-52c20537-8724-40a4-91ba-d27cf07c80f0" satisfied condition "Succeeded or Failed"
May 31 18:03:12.028: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-52c20537-8724-40a4-91ba-d27cf07c80f0 container secret-volume-test: <nil>
STEP: delete the pod
May 31 18:03:12.046: INFO: Waiting for pod pod-projected-secrets-52c20537-8724-40a4-91ba-d27cf07c80f0 to disappear
May 31 18:03:12.050: INFO: Pod pod-projected-secrets-52c20537-8724-40a4-91ba-d27cf07c80f0 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
May 31 18:03:12.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5992" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":292,"completed":61,"skipped":1224,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-a9caf8e5-b6d4-4032-96e2-d6014f8ba7db
STEP: Creating a pod to test consume configMaps
May 31 18:03:12.098: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-946c8377-44c4-4fb4-ac3e-bd2426a7b466" in namespace "projected-5147" to be "Succeeded or Failed"
May 31 18:03:12.100: INFO: Pod "pod-projected-configmaps-946c8377-44c4-4fb4-ac3e-bd2426a7b466": Phase="Pending", Reason="", readiness=false. Elapsed: 2.516908ms
May 31 18:03:14.107: INFO: Pod "pod-projected-configmaps-946c8377-44c4-4fb4-ac3e-bd2426a7b466": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00899413s
STEP: Saw pod success
May 31 18:03:14.107: INFO: Pod "pod-projected-configmaps-946c8377-44c4-4fb4-ac3e-bd2426a7b466" satisfied condition "Succeeded or Failed"
May 31 18:03:14.111: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-946c8377-44c4-4fb4-ac3e-bd2426a7b466 container projected-configmap-volume-test: <nil>
STEP: delete the pod
May 31 18:03:14.135: INFO: Waiting for pod pod-projected-configmaps-946c8377-44c4-4fb4-ac3e-bd2426a7b466 to disappear
May 31 18:03:14.139: INFO: Pod pod-projected-configmaps-946c8377-44c4-4fb4-ac3e-bd2426a7b466 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
May 31 18:03:14.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5147" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":292,"completed":62,"skipped":1239,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 18:03:14.187: INFO: Waiting up to 5m0s for pod "downwardapi-volume-489471b5-ced6-40e9-a6ba-42570625ffcc" in namespace "downward-api-5489" to be "Succeeded or Failed"
May 31 18:03:14.191: INFO: Pod "downwardapi-volume-489471b5-ced6-40e9-a6ba-42570625ffcc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.313528ms
May 31 18:03:16.199: INFO: Pod "downwardapi-volume-489471b5-ced6-40e9-a6ba-42570625ffcc": Phase="Running", Reason="", readiness=true. Elapsed: 2.011973044s
May 31 18:03:18.207: INFO: Pod "downwardapi-volume-489471b5-ced6-40e9-a6ba-42570625ffcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01979284s
STEP: Saw pod success
May 31 18:03:18.207: INFO: Pod "downwardapi-volume-489471b5-ced6-40e9-a6ba-42570625ffcc" satisfied condition "Succeeded or Failed"
May 31 18:03:18.214: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-489471b5-ced6-40e9-a6ba-42570625ffcc container client-container: <nil>
STEP: delete the pod
May 31 18:03:18.260: INFO: Waiting for pod downwardapi-volume-489471b5-ced6-40e9-a6ba-42570625ffcc to disappear
May 31 18:03:18.267: INFO: Pod downwardapi-volume-489471b5-ced6-40e9-a6ba-42570625ffcc no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
May 31 18:03:18.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5489" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":292,"completed":63,"skipped":1242,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
... skipping 42 lines ...
May 31 18:03:28.379: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 31 18:03:28.574: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  test/e2e/framework/framework.go:175
May 31 18:03:28.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-78" for this suite.
•{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":64,"skipped":1252,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
May 31 18:03:31.688: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
May 31 18:03:31.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9655" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":292,"completed":65,"skipped":1274,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-0e54255b-3215-49d2-946a-9cb11d4b93e2
STEP: Creating a pod to test consume secrets
May 31 18:03:31.800: INFO: Waiting up to 5m0s for pod "pod-secrets-a1d2f076-8083-4f26-9731-c8f309de18be" in namespace "secrets-3921" to be "Succeeded or Failed"
May 31 18:03:31.806: INFO: Pod "pod-secrets-a1d2f076-8083-4f26-9731-c8f309de18be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.466612ms
May 31 18:03:33.812: INFO: Pod "pod-secrets-a1d2f076-8083-4f26-9731-c8f309de18be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011763737s
May 31 18:03:35.823: INFO: Pod "pod-secrets-a1d2f076-8083-4f26-9731-c8f309de18be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022953726s
STEP: Saw pod success
May 31 18:03:35.823: INFO: Pod "pod-secrets-a1d2f076-8083-4f26-9731-c8f309de18be" satisfied condition "Succeeded or Failed"
May 31 18:03:35.835: INFO: Trying to get logs from node kind-worker pod pod-secrets-a1d2f076-8083-4f26-9731-c8f309de18be container secret-volume-test: <nil>
STEP: delete the pod
May 31 18:03:35.871: INFO: Waiting for pod pod-secrets-a1d2f076-8083-4f26-9731-c8f309de18be to disappear
May 31 18:03:35.875: INFO: Pod pod-secrets-a1d2f076-8083-4f26-9731-c8f309de18be no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
May 31 18:03:35.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3921" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":66,"skipped":1281,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 47 lines ...
• [SLOW TEST:310.209 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":292,"completed":67,"skipped":1292,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 17 lines ...
May 31 18:08:46.167: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8658 /api/v1/namespaces/watch-8658/configmaps/e2e-watch-test-watch-closed 19725f2a-e96b-4e39-b462-c30a37606e7d 8630 0 2020-05-31 18:08:46 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-31 18:08:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 31 18:08:46.167: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8658 /api/v1/namespaces/watch-8658/configmaps/e2e-watch-test-watch-closed 19725f2a-e96b-4e39-b462-c30a37606e7d 8631 0 2020-05-31 18:08:46 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-31 18:08:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
May 31 18:08:46.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8658" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":292,"completed":68,"skipped":1325,"failed":0}
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 29 lines ...
May 31 18:09:12.439: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 31 18:09:12.636: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
May 31 18:09:12.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5366" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":292,"completed":69,"skipped":1332,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
May 31 18:09:40.788: INFO: Restart count of pod container-probe-2966/liveness-95f71453-4072-49fc-8a74-086530ae29a1 is now 1 (24.091800981s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
May 31 18:09:40.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2966" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":292,"completed":70,"skipped":1341,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-0d97ca73-2000-494c-ab06-e1483c30bc25
STEP: Creating a pod to test consume configMaps
May 31 18:09:40.852: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-22228b95-efb2-4175-ad83-7ab8cf291ea9" in namespace "projected-6951" to be "Succeeded or Failed"
May 31 18:09:40.855: INFO: Pod "pod-projected-configmaps-22228b95-efb2-4175-ad83-7ab8cf291ea9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.07586ms
May 31 18:09:42.864: INFO: Pod "pod-projected-configmaps-22228b95-efb2-4175-ad83-7ab8cf291ea9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011747801s
STEP: Saw pod success
May 31 18:09:42.864: INFO: Pod "pod-projected-configmaps-22228b95-efb2-4175-ad83-7ab8cf291ea9" satisfied condition "Succeeded or Failed"
May 31 18:09:42.873: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-22228b95-efb2-4175-ad83-7ab8cf291ea9 container projected-configmap-volume-test: <nil>
STEP: delete the pod
May 31 18:09:42.915: INFO: Waiting for pod pod-projected-configmaps-22228b95-efb2-4175-ad83-7ab8cf291ea9 to disappear
May 31 18:09:42.924: INFO: Pod pod-projected-configmaps-22228b95-efb2-4175-ad83-7ab8cf291ea9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
May 31 18:09:42.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6951" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":292,"completed":71,"skipped":1392,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 25 lines ...
May 31 18:09:45.107: INFO: Pod "test-recreate-deployment-d5667d9c7-w8fts" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-w8fts test-recreate-deployment-d5667d9c7- deployment-3090 /api/v1/namespaces/deployment-3090/pods/test-recreate-deployment-d5667d9c7-w8fts 0d9720f0-f634-4373-ab03-153cc8ed0170 8971 0 2020-05-31 18:09:45 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 2e94ad43-1d72-454b-a11e-d2d7d2d9d13b 0xc00217d1f0 0xc00217d1f1}] []  [{kube-controller-manager Update v1 2020-05-31 18:09:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2e94ad43-1d72-454b-a11e-d2d7d2d9d13b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-31 18:09:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gtxsf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gtxsf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gtxsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,Ru
nAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 18:09:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 18:09:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 18:09:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 18:09:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-05-31 18:09:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
May 31 18:09:45.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3090" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":292,"completed":72,"skipped":1414,"failed":0}
SSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
May 31 18:09:45.116: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May 31 18:09:45.155: INFO: Waiting up to 5m0s for pod "downward-api-d15e117a-3515-47bd-83b3-5cedf3db6bf0" in namespace "downward-api-5444" to be "Succeeded or Failed"
May 31 18:09:45.159: INFO: Pod "downward-api-d15e117a-3515-47bd-83b3-5cedf3db6bf0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.621528ms
May 31 18:09:47.170: INFO: Pod "downward-api-d15e117a-3515-47bd-83b3-5cedf3db6bf0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014660584s
May 31 18:09:49.175: INFO: Pod "downward-api-d15e117a-3515-47bd-83b3-5cedf3db6bf0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020079787s
STEP: Saw pod success
May 31 18:09:49.175: INFO: Pod "downward-api-d15e117a-3515-47bd-83b3-5cedf3db6bf0" satisfied condition "Succeeded or Failed"
May 31 18:09:49.178: INFO: Trying to get logs from node kind-worker2 pod downward-api-d15e117a-3515-47bd-83b3-5cedf3db6bf0 container dapi-container: <nil>
STEP: delete the pod
May 31 18:09:49.199: INFO: Waiting for pod downward-api-d15e117a-3515-47bd-83b3-5cedf3db6bf0 to disappear
May 31 18:09:49.203: INFO: Pod downward-api-d15e117a-3515-47bd-83b3-5cedf3db6bf0 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
May 31 18:09:49.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5444" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":292,"completed":73,"skipped":1419,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 127 lines ...
May 31 18:10:26.467: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3125/pods","resourceVersion":"9252"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
May 31 18:10:26.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3125" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":292,"completed":74,"skipped":1434,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 18:10:26.531: INFO: Waiting up to 5m0s for pod "downwardapi-volume-873960c5-1890-447a-9a74-48587afb599b" in namespace "downward-api-8482" to be "Succeeded or Failed"
May 31 18:10:26.534: INFO: Pod "downwardapi-volume-873960c5-1890-447a-9a74-48587afb599b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.976971ms
May 31 18:10:28.539: INFO: Pod "downwardapi-volume-873960c5-1890-447a-9a74-48587afb599b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007457056s
May 31 18:10:30.554: INFO: Pod "downwardapi-volume-873960c5-1890-447a-9a74-48587afb599b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023145123s
STEP: Saw pod success
May 31 18:10:30.555: INFO: Pod "downwardapi-volume-873960c5-1890-447a-9a74-48587afb599b" satisfied condition "Succeeded or Failed"
May 31 18:10:30.563: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-873960c5-1890-447a-9a74-48587afb599b container client-container: <nil>
STEP: delete the pod
May 31 18:10:30.602: INFO: Waiting for pod downwardapi-volume-873960c5-1890-447a-9a74-48587afb599b to disappear
May 31 18:10:30.608: INFO: Pod downwardapi-volume-873960c5-1890-447a-9a74-48587afb599b no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
May 31 18:10:30.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8482" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":292,"completed":75,"skipped":1450,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 39 lines ...
May 31 18:10:41.936: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:42753 --kubeconfig=/root/.kube/kind-test-config explain e2e-test-crd-publish-openapi-7717-crds.spec'
May 31 18:10:42.679: INFO: stderr: ""
May 31 18:10:42.679: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7717-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
May 31 18:10:42.679: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:42753 --kubeconfig=/root/.kube/kind-test-config explain e2e-test-crd-publish-openapi-7717-crds.spec.bars'
May 31 18:10:43.402: INFO: stderr: ""
May 31 18:10:43.402: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7717-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
May 31 18:10:43.403: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:42753 --kubeconfig=/root/.kube/kind-test-config explain e2e-test-crd-publish-openapi-7717-crds.spec.bars2'
May 31 18:10:44.180: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 18:10:47.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2256" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":292,"completed":76,"skipped":1451,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl server-side dry-run 
  should check if kubectl can dry-run update Pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 22 lines ...
May 31 18:10:56.779: INFO: stderr: ""
May 31 18:10:56.779: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 18:10:56.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6294" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":292,"completed":77,"skipped":1526,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
May 31 18:11:07.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-60" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":292,"completed":78,"skipped":1541,"failed":0}
SS
------------------------------
[sig-network] Services 
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 45 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 18:11:26.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3929" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":292,"completed":79,"skipped":1543,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
May 31 18:11:26.885: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's args
May 31 18:11:26.943: INFO: Waiting up to 5m0s for pod "var-expansion-e494a13d-6f85-4dc1-8909-01c6680bae6a" in namespace "var-expansion-123" to be "Succeeded or Failed"
May 31 18:11:26.951: INFO: Pod "var-expansion-e494a13d-6f85-4dc1-8909-01c6680bae6a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.395691ms
May 31 18:11:28.958: INFO: Pod "var-expansion-e494a13d-6f85-4dc1-8909-01c6680bae6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014445061s
May 31 18:11:30.961: INFO: Pod "var-expansion-e494a13d-6f85-4dc1-8909-01c6680bae6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018168562s
STEP: Saw pod success
May 31 18:11:30.961: INFO: Pod "var-expansion-e494a13d-6f85-4dc1-8909-01c6680bae6a" satisfied condition "Succeeded or Failed"
May 31 18:11:30.966: INFO: Trying to get logs from node kind-worker2 pod var-expansion-e494a13d-6f85-4dc1-8909-01c6680bae6a container dapi-container: <nil>
STEP: delete the pod
May 31 18:11:30.986: INFO: Waiting for pod var-expansion-e494a13d-6f85-4dc1-8909-01c6680bae6a to disappear
May 31 18:11:30.988: INFO: Pod var-expansion-e494a13d-6f85-4dc1-8909-01c6680bae6a no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
May 31 18:11:30.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-123" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":292,"completed":80,"skipped":1552,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
May 31 18:11:35.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3945" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":292,"completed":81,"skipped":1572,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
May 31 18:11:38.224: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
May 31 18:11:38.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9389" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":292,"completed":82,"skipped":1579,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-a193e191-2889-4acc-b167-ec7f24ba03d7
STEP: Creating a pod to test consume secrets
May 31 18:11:38.296: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d48640eb-a0f0-400a-850e-a7d92aa47883" in namespace "projected-8095" to be "Succeeded or Failed"
May 31 18:11:38.303: INFO: Pod "pod-projected-secrets-d48640eb-a0f0-400a-850e-a7d92aa47883": Phase="Pending", Reason="", readiness=false. Elapsed: 7.401736ms
May 31 18:11:40.308: INFO: Pod "pod-projected-secrets-d48640eb-a0f0-400a-850e-a7d92aa47883": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012308106s
May 31 18:11:42.319: INFO: Pod "pod-projected-secrets-d48640eb-a0f0-400a-850e-a7d92aa47883": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023304149s
STEP: Saw pod success
May 31 18:11:42.319: INFO: Pod "pod-projected-secrets-d48640eb-a0f0-400a-850e-a7d92aa47883" satisfied condition "Succeeded or Failed"
May 31 18:11:42.328: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-d48640eb-a0f0-400a-850e-a7d92aa47883 container projected-secret-volume-test: <nil>
STEP: delete the pod
May 31 18:11:42.351: INFO: Waiting for pod pod-projected-secrets-d48640eb-a0f0-400a-850e-a7d92aa47883 to disappear
May 31 18:11:42.355: INFO: Pod pod-projected-secrets-d48640eb-a0f0-400a-850e-a7d92aa47883 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
May 31 18:11:42.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8095" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":83,"skipped":1591,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 22 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 18:11:51.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3430" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":292,"completed":84,"skipped":1599,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] 
  removing taint cancels eviction [Disruptive] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
... skipping 20 lines ...
STEP: Waiting some time to make sure that toleration time passed.
May 31 18:14:06.863: INFO: Pod wasn't evicted. Test successful
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  test/e2e/framework/framework.go:175
May 31 18:14:06.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-single-pod-7812" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":292,"completed":85,"skipped":1617,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-downwardapi-jmc2
STEP: Creating a pod to test atomic-volume-subpath
May 31 18:14:06.943: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-jmc2" in namespace "subpath-2331" to be "Succeeded or Failed"
May 31 18:14:06.946: INFO: Pod "pod-subpath-test-downwardapi-jmc2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.184348ms
May 31 18:14:08.952: INFO: Pod "pod-subpath-test-downwardapi-jmc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008761945s
May 31 18:14:10.958: INFO: Pod "pod-subpath-test-downwardapi-jmc2": Phase="Running", Reason="", readiness=true. Elapsed: 4.015130842s
May 31 18:14:12.963: INFO: Pod "pod-subpath-test-downwardapi-jmc2": Phase="Running", Reason="", readiness=true. Elapsed: 6.019328481s
May 31 18:14:14.967: INFO: Pod "pod-subpath-test-downwardapi-jmc2": Phase="Running", Reason="", readiness=true. Elapsed: 8.023687768s
May 31 18:14:16.971: INFO: Pod "pod-subpath-test-downwardapi-jmc2": Phase="Running", Reason="", readiness=true. Elapsed: 10.0282351s
... skipping 2 lines ...
May 31 18:14:22.998: INFO: Pod "pod-subpath-test-downwardapi-jmc2": Phase="Running", Reason="", readiness=true. Elapsed: 16.055259052s
May 31 18:14:25.004: INFO: Pod "pod-subpath-test-downwardapi-jmc2": Phase="Running", Reason="", readiness=true. Elapsed: 18.060340069s
May 31 18:14:27.014: INFO: Pod "pod-subpath-test-downwardapi-jmc2": Phase="Running", Reason="", readiness=true. Elapsed: 20.070402097s
May 31 18:14:29.024: INFO: Pod "pod-subpath-test-downwardapi-jmc2": Phase="Running", Reason="", readiness=true. Elapsed: 22.080313676s
May 31 18:14:31.031: INFO: Pod "pod-subpath-test-downwardapi-jmc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.087478702s
STEP: Saw pod success
May 31 18:14:31.031: INFO: Pod "pod-subpath-test-downwardapi-jmc2" satisfied condition "Succeeded or Failed"
May 31 18:14:31.036: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-downwardapi-jmc2 container test-container-subpath-downwardapi-jmc2: <nil>
STEP: delete the pod
May 31 18:14:31.067: INFO: Waiting for pod pod-subpath-test-downwardapi-jmc2 to disappear
May 31 18:14:31.070: INFO: Pod pod-subpath-test-downwardapi-jmc2 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-jmc2
May 31 18:14:31.070: INFO: Deleting pod "pod-subpath-test-downwardapi-jmc2" in namespace "subpath-2331"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
May 31 18:14:31.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2331" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":292,"completed":86,"skipped":1640,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 26 lines ...
  test/e2e/framework/framework.go:175
May 31 18:14:36.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6898" for this suite.
STEP: Destroying namespace "webhook-6898-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":292,"completed":87,"skipped":1642,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 47 lines ...
May 31 18:14:41.611: INFO: stderr: ""
May 31 18:14:41.611: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 18:14:41.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4665" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":292,"completed":88,"skipped":1644,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
May 31 18:14:57.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5539" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":292,"completed":89,"skipped":1650,"failed":0}
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
May 31 18:15:01.891: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:01.895: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:01.950: INFO: Unable to read jessie_udp@dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:01.954: INFO: Unable to read jessie_tcp@dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:01.959: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:01.963: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:01.987: INFO: Lookups using dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba failed for: [wheezy_udp@dns-test-service.dns-9098.svc.cluster.local wheezy_tcp@dns-test-service.dns-9098.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local jessie_udp@dns-test-service.dns-9098.svc.cluster.local jessie_tcp@dns-test-service.dns-9098.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local]

May 31 18:15:06.997: INFO: Unable to read wheezy_udp@dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:07.004: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:07.011: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:07.018: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:07.056: INFO: Unable to read jessie_udp@dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:07.060: INFO: Unable to read jessie_tcp@dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:07.064: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:07.068: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:07.090: INFO: Lookups using dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba failed for: [wheezy_udp@dns-test-service.dns-9098.svc.cluster.local wheezy_tcp@dns-test-service.dns-9098.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local jessie_udp@dns-test-service.dns-9098.svc.cluster.local jessie_tcp@dns-test-service.dns-9098.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local]

May 31 18:15:11.996: INFO: Unable to read wheezy_udp@dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:12.000: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:12.004: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:12.008: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:12.034: INFO: Unable to read jessie_udp@dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:12.038: INFO: Unable to read jessie_tcp@dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:12.042: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:12.046: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:12.067: INFO: Lookups using dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba failed for: [wheezy_udp@dns-test-service.dns-9098.svc.cluster.local wheezy_tcp@dns-test-service.dns-9098.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local jessie_udp@dns-test-service.dns-9098.svc.cluster.local jessie_tcp@dns-test-service.dns-9098.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local]

May 31 18:15:16.998: INFO: Unable to read wheezy_udp@dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:17.009: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:17.019: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:17.030: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:17.085: INFO: Unable to read jessie_udp@dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:17.099: INFO: Unable to read jessie_tcp@dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:17.104: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:17.111: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:17.143: INFO: Lookups using dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba failed for: [wheezy_udp@dns-test-service.dns-9098.svc.cluster.local wheezy_tcp@dns-test-service.dns-9098.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local jessie_udp@dns-test-service.dns-9098.svc.cluster.local jessie_tcp@dns-test-service.dns-9098.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local]

May 31 18:15:21.994: INFO: Unable to read wheezy_udp@dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:21.999: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:22.004: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:22.007: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:22.035: INFO: Unable to read jessie_udp@dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:22.038: INFO: Unable to read jessie_tcp@dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:22.042: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:22.051: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:22.076: INFO: Lookups using dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba failed for: [wheezy_udp@dns-test-service.dns-9098.svc.cluster.local wheezy_tcp@dns-test-service.dns-9098.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local jessie_udp@dns-test-service.dns-9098.svc.cluster.local jessie_tcp@dns-test-service.dns-9098.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local]

May 31 18:15:26.998: INFO: Unable to read wheezy_udp@dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:27.008: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:27.016: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:27.023: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:27.054: INFO: Unable to read jessie_udp@dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:27.058: INFO: Unable to read jessie_tcp@dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:27.062: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:27.066: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local from pod dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba: the server could not find the requested resource (get pods dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba)
May 31 18:15:27.090: INFO: Lookups using dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba failed for: [wheezy_udp@dns-test-service.dns-9098.svc.cluster.local wheezy_tcp@dns-test-service.dns-9098.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local jessie_udp@dns-test-service.dns-9098.svc.cluster.local jessie_tcp@dns-test-service.dns-9098.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9098.svc.cluster.local]

May 31 18:15:32.084: INFO: DNS probes using dns-9098/dns-test-7734a0f2-fd61-4786-a9a8-460eb5ac90ba succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
May 31 18:15:32.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9098" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":292,"completed":90,"skipped":1655,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
May 31 18:15:38.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1751" for this suite.
STEP: Destroying namespace "webhook-1751-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":292,"completed":91,"skipped":1668,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-925db1dd-0701-449d-8c7c-2b6dfa4e688a
STEP: Creating a pod to test consume configMaps
May 31 18:15:38.236: INFO: Waiting up to 5m0s for pod "pod-configmaps-b5a0941e-8ed2-4af5-8399-b52b41df22da" in namespace "configmap-864" to be "Succeeded or Failed"
May 31 18:15:38.243: INFO: Pod "pod-configmaps-b5a0941e-8ed2-4af5-8399-b52b41df22da": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130621ms
May 31 18:15:40.248: INFO: Pod "pod-configmaps-b5a0941e-8ed2-4af5-8399-b52b41df22da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012019839s
May 31 18:15:42.260: INFO: Pod "pod-configmaps-b5a0941e-8ed2-4af5-8399-b52b41df22da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023195353s
STEP: Saw pod success
May 31 18:15:42.260: INFO: Pod "pod-configmaps-b5a0941e-8ed2-4af5-8399-b52b41df22da" satisfied condition "Succeeded or Failed"
May 31 18:15:42.266: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-b5a0941e-8ed2-4af5-8399-b52b41df22da container configmap-volume-test: <nil>
STEP: delete the pod
May 31 18:15:42.292: INFO: Waiting for pod pod-configmaps-b5a0941e-8ed2-4af5-8399-b52b41df22da to disappear
May 31 18:15:42.299: INFO: Pod pod-configmaps-b5a0941e-8ed2-4af5-8399-b52b41df22da no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
May 31 18:15:42.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-864" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":292,"completed":92,"skipped":1671,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
May 31 18:15:53.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2157" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":292,"completed":93,"skipped":1692,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
May 31 18:15:53.410: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override arguments
May 31 18:15:53.451: INFO: Waiting up to 5m0s for pod "client-containers-d53a408f-3082-41a9-8017-950fc74d99f6" in namespace "containers-2786" to be "Succeeded or Failed"
May 31 18:15:53.454: INFO: Pod "client-containers-d53a408f-3082-41a9-8017-950fc74d99f6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.244166ms
May 31 18:15:55.459: INFO: Pod "client-containers-d53a408f-3082-41a9-8017-950fc74d99f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007789813s
May 31 18:15:57.467: INFO: Pod "client-containers-d53a408f-3082-41a9-8017-950fc74d99f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016336442s
STEP: Saw pod success
May 31 18:15:57.467: INFO: Pod "client-containers-d53a408f-3082-41a9-8017-950fc74d99f6" satisfied condition "Succeeded or Failed"
May 31 18:15:57.473: INFO: Trying to get logs from node kind-worker2 pod client-containers-d53a408f-3082-41a9-8017-950fc74d99f6 container test-container: <nil>
STEP: delete the pod
May 31 18:15:57.514: INFO: Waiting for pod client-containers-d53a408f-3082-41a9-8017-950fc74d99f6 to disappear
May 31 18:15:57.519: INFO: Pod client-containers-d53a408f-3082-41a9-8017-950fc74d99f6 no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
May 31 18:15:57.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2786" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":292,"completed":94,"skipped":1709,"failed":0}
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
May 31 18:15:57.584: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
May 31 18:16:07.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9402" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":292,"completed":95,"skipped":1711,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-x22h
STEP: Creating a pod to test atomic-volume-subpath
May 31 18:16:07.122: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-x22h" in namespace "subpath-9931" to be "Succeeded or Failed"
May 31 18:16:07.124: INFO: Pod "pod-subpath-test-configmap-x22h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205005ms
May 31 18:16:09.128: INFO: Pod "pod-subpath-test-configmap-x22h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006239719s
May 31 18:16:11.137: INFO: Pod "pod-subpath-test-configmap-x22h": Phase="Running", Reason="", readiness=true. Elapsed: 4.015046571s
May 31 18:16:13.143: INFO: Pod "pod-subpath-test-configmap-x22h": Phase="Running", Reason="", readiness=true. Elapsed: 6.021108797s
May 31 18:16:15.148: INFO: Pod "pod-subpath-test-configmap-x22h": Phase="Running", Reason="", readiness=true. Elapsed: 8.026117105s
May 31 18:16:17.159: INFO: Pod "pod-subpath-test-configmap-x22h": Phase="Running", Reason="", readiness=true. Elapsed: 10.036551551s
... skipping 2 lines ...
May 31 18:16:23.172: INFO: Pod "pod-subpath-test-configmap-x22h": Phase="Running", Reason="", readiness=true. Elapsed: 16.049411664s
May 31 18:16:25.176: INFO: Pod "pod-subpath-test-configmap-x22h": Phase="Running", Reason="", readiness=true. Elapsed: 18.053969067s
May 31 18:16:27.184: INFO: Pod "pod-subpath-test-configmap-x22h": Phase="Running", Reason="", readiness=true. Elapsed: 20.061531197s
May 31 18:16:29.188: INFO: Pod "pod-subpath-test-configmap-x22h": Phase="Running", Reason="", readiness=true. Elapsed: 22.065552579s
May 31 18:16:31.196: INFO: Pod "pod-subpath-test-configmap-x22h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.073750404s
STEP: Saw pod success
May 31 18:16:31.196: INFO: Pod "pod-subpath-test-configmap-x22h" satisfied condition "Succeeded or Failed"
May 31 18:16:31.203: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-configmap-x22h container test-container-subpath-configmap-x22h: <nil>
STEP: delete the pod
May 31 18:16:31.244: INFO: Waiting for pod pod-subpath-test-configmap-x22h to disappear
May 31 18:16:31.250: INFO: Pod pod-subpath-test-configmap-x22h no longer exists
STEP: Deleting pod pod-subpath-test-configmap-x22h
May 31 18:16:31.250: INFO: Deleting pod "pod-subpath-test-configmap-x22h" in namespace "subpath-9931"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
May 31 18:16:31.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9931" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":292,"completed":96,"skipped":1733,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 18:16:31.270: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
May 31 18:16:31.331: INFO: Waiting up to 5m0s for pod "pod-329ceadd-9349-439d-a1ec-ef3ca77c9705" in namespace "emptydir-9086" to be "Succeeded or Failed"
May 31 18:16:31.335: INFO: Pod "pod-329ceadd-9349-439d-a1ec-ef3ca77c9705": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190196ms
May 31 18:16:33.344: INFO: Pod "pod-329ceadd-9349-439d-a1ec-ef3ca77c9705": Phase="Running", Reason="", readiness=true. Elapsed: 2.012851915s
May 31 18:16:35.352: INFO: Pod "pod-329ceadd-9349-439d-a1ec-ef3ca77c9705": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021465596s
STEP: Saw pod success
May 31 18:16:35.353: INFO: Pod "pod-329ceadd-9349-439d-a1ec-ef3ca77c9705" satisfied condition "Succeeded or Failed"
May 31 18:16:35.362: INFO: Trying to get logs from node kind-worker2 pod pod-329ceadd-9349-439d-a1ec-ef3ca77c9705 container test-container: <nil>
STEP: delete the pod
May 31 18:16:35.379: INFO: Waiting for pod pod-329ceadd-9349-439d-a1ec-ef3ca77c9705 to disappear
May 31 18:16:35.382: INFO: Pod pod-329ceadd-9349-439d-a1ec-ef3ca77c9705 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 18:16:35.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9086" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":97,"skipped":1736,"failed":0}
SSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
May 31 18:16:39.449: INFO: Initial restart count of pod liveness-334aef93-e710-4a2d-b186-0abf5186430a is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
May 31 18:20:40.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9454" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":292,"completed":98,"skipped":1739,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 10 lines ...
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
May 31 18:20:40.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5481" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":292,"completed":99,"skipped":1753,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
May 31 18:20:47.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4653" for this suite.
STEP: Destroying namespace "webhook-4653-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":292,"completed":100,"skipped":1781,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 18:20:48.120: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d3cc857-df13-4c62-b4f1-d99413fcc85d" in namespace "projected-7856" to be "Succeeded or Failed"
May 31 18:20:48.134: INFO: Pod "downwardapi-volume-9d3cc857-df13-4c62-b4f1-d99413fcc85d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.57007ms
May 31 18:20:50.148: INFO: Pod "downwardapi-volume-9d3cc857-df13-4c62-b4f1-d99413fcc85d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025918618s
May 31 18:20:52.156: INFO: Pod "downwardapi-volume-9d3cc857-df13-4c62-b4f1-d99413fcc85d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034127914s
STEP: Saw pod success
May 31 18:20:52.156: INFO: Pod "downwardapi-volume-9d3cc857-df13-4c62-b4f1-d99413fcc85d" satisfied condition "Succeeded or Failed"
May 31 18:20:52.162: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-9d3cc857-df13-4c62-b4f1-d99413fcc85d container client-container: <nil>
STEP: delete the pod
May 31 18:20:52.224: INFO: Waiting for pod downwardapi-volume-9d3cc857-df13-4c62-b4f1-d99413fcc85d to disappear
May 31 18:20:52.229: INFO: Pod downwardapi-volume-9d3cc857-df13-4c62-b4f1-d99413fcc85d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
May 31 18:20:52.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7856" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":292,"completed":101,"skipped":1786,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 21 lines ...
May 31 18:21:08.431: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
May 31 18:21:08.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8297" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":292,"completed":102,"skipped":1800,"failed":0}

------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 18:21:08.460: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
May 31 18:21:08.522: INFO: Waiting up to 5m0s for pod "pod-5876dd76-a631-486e-9465-466b359ddec6" in namespace "emptydir-814" to be "Succeeded or Failed"
May 31 18:21:08.531: INFO: Pod "pod-5876dd76-a631-486e-9465-466b359ddec6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.23399ms
May 31 18:21:10.540: INFO: Pod "pod-5876dd76-a631-486e-9465-466b359ddec6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017224224s
May 31 18:21:12.551: INFO: Pod "pod-5876dd76-a631-486e-9465-466b359ddec6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028257115s
STEP: Saw pod success
May 31 18:21:12.551: INFO: Pod "pod-5876dd76-a631-486e-9465-466b359ddec6" satisfied condition "Succeeded or Failed"
May 31 18:21:12.556: INFO: Trying to get logs from node kind-worker2 pod pod-5876dd76-a631-486e-9465-466b359ddec6 container test-container: <nil>
STEP: delete the pod
May 31 18:21:12.580: INFO: Waiting for pod pod-5876dd76-a631-486e-9465-466b359ddec6 to disappear
May 31 18:21:12.582: INFO: Pod pod-5876dd76-a631-486e-9465-466b359ddec6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 18:21:12.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-814" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":103,"skipped":1800,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
May 31 18:21:12.632: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
May 31 18:21:17.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3345" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":292,"completed":104,"skipped":1810,"failed":0}
S
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 3 lines ...
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  test/e2e/common/pods.go:179
[It] should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
May 31 18:21:21.128: INFO: Waiting up to 5m0s for pod "client-envvars-7cd5f50d-97ab-441c-a74d-8ba1de10cbcf" in namespace "pods-7818" to be "Succeeded or Failed"
May 31 18:21:21.131: INFO: Pod "client-envvars-7cd5f50d-97ab-441c-a74d-8ba1de10cbcf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.291423ms
May 31 18:21:23.136: INFO: Pod "client-envvars-7cd5f50d-97ab-441c-a74d-8ba1de10cbcf": Phase="Running", Reason="", readiness=true. Elapsed: 2.008303497s
May 31 18:21:25.144: INFO: Pod "client-envvars-7cd5f50d-97ab-441c-a74d-8ba1de10cbcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0157737s
STEP: Saw pod success
May 31 18:21:25.144: INFO: Pod "client-envvars-7cd5f50d-97ab-441c-a74d-8ba1de10cbcf" satisfied condition "Succeeded or Failed"
May 31 18:21:25.151: INFO: Trying to get logs from node kind-worker pod client-envvars-7cd5f50d-97ab-441c-a74d-8ba1de10cbcf container env3cont: <nil>
STEP: delete the pod
May 31 18:21:25.187: INFO: Waiting for pod client-envvars-7cd5f50d-97ab-441c-a74d-8ba1de10cbcf to disappear
May 31 18:21:25.192: INFO: Pod client-envvars-7cd5f50d-97ab-441c-a74d-8ba1de10cbcf no longer exists
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
May 31 18:21:25.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7818" for this suite.
•{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":292,"completed":105,"skipped":1811,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
May 31 18:21:25.202: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May 31 18:21:25.244: INFO: Waiting up to 5m0s for pod "downward-api-9d986c7a-b7b8-407b-9688-94422f3d8ebd" in namespace "downward-api-9009" to be "Succeeded or Failed"
May 31 18:21:25.248: INFO: Pod "downward-api-9d986c7a-b7b8-407b-9688-94422f3d8ebd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009205ms
May 31 18:21:27.255: INFO: Pod "downward-api-9d986c7a-b7b8-407b-9688-94422f3d8ebd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01076964s
May 31 18:21:29.263: INFO: Pod "downward-api-9d986c7a-b7b8-407b-9688-94422f3d8ebd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019328033s
STEP: Saw pod success
May 31 18:21:29.264: INFO: Pod "downward-api-9d986c7a-b7b8-407b-9688-94422f3d8ebd" satisfied condition "Succeeded or Failed"
May 31 18:21:29.270: INFO: Trying to get logs from node kind-worker pod downward-api-9d986c7a-b7b8-407b-9688-94422f3d8ebd container dapi-container: <nil>
STEP: delete the pod
May 31 18:21:29.302: INFO: Waiting for pod downward-api-9d986c7a-b7b8-407b-9688-94422f3d8ebd to disappear
May 31 18:21:29.307: INFO: Pod downward-api-9d986c7a-b7b8-407b-9688-94422f3d8ebd no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
May 31 18:21:29.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9009" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":292,"completed":106,"skipped":1834,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 18:21:36.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-8874" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":292,"completed":107,"skipped":1854,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 34 lines ...
May 31 18:21:43.072: INFO: stdout: "service/rm3 exposed\n"
May 31 18:21:43.080: INFO: Service rm3 in namespace kubectl-1873 found.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 18:21:45.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1873" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":292,"completed":108,"skipped":1859,"failed":0}
S
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
May 31 18:21:46.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0531 18:21:46.224475   11886 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-6836" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":292,"completed":109,"skipped":1860,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl diff 
  should check if kubectl diff finds a difference for Deployments [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 17 lines ...
May 31 18:21:48.919: INFO: stderr: ""
May 31 18:21:48.919: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 18:21:48.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5489" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":292,"completed":110,"skipped":1866,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 12 lines ...
STEP: reading a file in the container
May 31 18:21:54.611: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl exec --namespace=svcaccounts-8388 pod-service-account-2c2e155f-f855-4725-95d8-988280c96014 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
May 31 18:21:55.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8388" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":292,"completed":111,"skipped":1880,"failed":0}
S
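
The `kubectl exec ... cat /var/run/secrets/kubernetes.io/serviceaccount/namespace` above reads one of the three files the service account admission controller mounts into every pod by default. From inside a container, the same check is plain file reads:

package main

import (
    "fmt"
    "io/ioutil"
)

func main() {
    const dir = "/var/run/secrets/kubernetes.io/serviceaccount"
    for _, f := range []string{"token", "ca.crt", "namespace"} {
        data, err := ioutil.ReadFile(dir + "/" + f)
        if err != nil {
            fmt.Println(f, "not mounted:", err)
            continue
        }
        fmt.Printf("%s: %d bytes\n", f, len(data))
    }
}
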
------------------------------
[k8s.io] Variable Expansion 
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 31 18:21:55.224: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod with failed condition
STEP: updating the pod
May 31 18:23:55.828: INFO: Successfully updated pod "var-expansion-ce88540b-c884-4825-9212-7b1d99cf376f"
STEP: waiting for pod running
STEP: deleting the pod gracefully
May 31 18:23:57.844: INFO: Deleting pod "var-expansion-ce88540b-c884-4825-9212-7b1d99cf376f" in namespace "var-expansion-1240"
May 31 18:23:57.855: INFO: Wait up to 5m0s for pod "var-expansion-ce88540b-c884-4825-9212-7b1d99cf376f" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
May 31 18:24:37.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1240" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":292,"completed":112,"skipped":1881,"failed":0}
SSSSSSS
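
The "failing subpath expansion" case hinges on subPathExpr, which the kubelet expands from the container's environment before mounting. A sketch of the relevant container fields (volume name and mount path are assumptions):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    container := corev1.Container{
        Name:  "dapi-container",
        Image: "busybox",
        Env: []corev1.EnvVar{{
            Name: "POD_NAME",
            ValueFrom: &corev1.EnvVarSource{
                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
            },
        }},
        VolumeMounts: []corev1.VolumeMount{{
            Name:      "workdir",
            MountPath: "/logs",
            // Expanded by the kubelet; referencing an undefined variable leaves
            // the pod stuck, the "failed condition" the test creates and then
            // clears by updating the pod.
            SubPathExpr: "$(POD_NAME)",
        }},
    }
    fmt.Println(container.Name)
}
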
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 26 lines ...
May 31 18:25:28.002: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8437 /api/v1/namespaces/watch-8437/configmaps/e2e-watch-test-configmap-b 5874c8e8-a182-4412-8022-886a2308fcfd 13411 0 2020-05-31 18:25:17 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-31 18:25:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 31 18:25:28.002: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-8437 /api/v1/namespaces/watch-8437/configmaps/e2e-watch-test-configmap-b 5874c8e8-a182-4412-8022-886a2308fcfd 13411 0 2020-05-31 18:25:17 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-31 18:25:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
May 31 18:25:38.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8437" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":292,"completed":113,"skipped":1888,"failed":0}
SSSSSSSSSSS
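
The ADDED/MODIFIED/DELETED lines above are watch events. Stripped of the e2e framework, the pattern is a client-go Watch plus a range over its result channel; the kubeconfig path is from this log, while the namespace and label selector are assumptions:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/kind-test-config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(config)

    // The label selector narrows the stream to the test's own ConfigMaps.
    w, err := client.CoreV1().ConfigMaps("default").Watch(context.TODO(),
        metav1.ListOptions{LabelSelector: "watch-this-configmap=multiple-watchers-A"})
    if err != nil {
        panic(err)
    }
    defer w.Stop()
    for ev := range w.ResultChan() {
        fmt.Println("Got :", ev.Type) // ADDED, MODIFIED, DELETED, as in the log above
    }
}
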
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
May 31 18:25:38.026: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May 31 18:25:38.095: INFO: Waiting up to 5m0s for pod "downward-api-6f79c7b4-b17f-4a53-9dc5-8ca9e4326d53" in namespace "downward-api-8063" to be "Succeeded or Failed"
May 31 18:25:38.099: INFO: Pod "downward-api-6f79c7b4-b17f-4a53-9dc5-8ca9e4326d53": Phase="Pending", Reason="", readiness=false. Elapsed: 3.915423ms
May 31 18:25:40.114: INFO: Pod "downward-api-6f79c7b4-b17f-4a53-9dc5-8ca9e4326d53": Phase="Running", Reason="", readiness=true. Elapsed: 2.019059784s
May 31 18:25:42.120: INFO: Pod "downward-api-6f79c7b4-b17f-4a53-9dc5-8ca9e4326d53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025280623s
STEP: Saw pod success
May 31 18:25:42.120: INFO: Pod "downward-api-6f79c7b4-b17f-4a53-9dc5-8ca9e4326d53" satisfied condition "Succeeded or Failed"
May 31 18:25:42.126: INFO: Trying to get logs from node kind-worker2 pod downward-api-6f79c7b4-b17f-4a53-9dc5-8ca9e4326d53 container dapi-container: <nil>
STEP: delete the pod
May 31 18:25:42.168: INFO: Waiting for pod downward-api-6f79c7b4-b17f-4a53-9dc5-8ca9e4326d53 to disappear
May 31 18:25:42.175: INFO: Pod downward-api-6f79c7b4-b17f-4a53-9dc5-8ca9e4326d53 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
May 31 18:25:42.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8063" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":292,"completed":114,"skipped":1899,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
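
The "default limits from node allocatable" behavior comes from resourceFieldRef: when the container declares no limit of its own, the reported value falls back to the node's allocatable capacity. A sketch of the two env vars involved:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    env := []corev1.EnvVar{
        {
            Name: "CPU_LIMIT",
            ValueFrom: &corev1.EnvVarSource{
                ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
            },
        },
        {
            Name: "MEMORY_LIMIT",
            ValueFrom: &corev1.EnvVarSource{
                ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
            },
        },
    }
    fmt.Println(len(env), "downward API env vars")
}
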
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-fd1b57d6-755b-45f5-94c6-a8cda8b846ee
STEP: Creating a pod to test consume secrets
May 31 18:25:42.270: INFO: Waiting up to 5m0s for pod "pod-secrets-28b71580-19de-418b-8335-3e19d55e2800" in namespace "secrets-4348" to be "Succeeded or Failed"
May 31 18:25:42.274: INFO: Pod "pod-secrets-28b71580-19de-418b-8335-3e19d55e2800": Phase="Pending", Reason="", readiness=false. Elapsed: 3.83284ms
May 31 18:25:44.280: INFO: Pod "pod-secrets-28b71580-19de-418b-8335-3e19d55e2800": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009159863s
May 31 18:25:46.285: INFO: Pod "pod-secrets-28b71580-19de-418b-8335-3e19d55e2800": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014114947s
STEP: Saw pod success
May 31 18:25:46.286: INFO: Pod "pod-secrets-28b71580-19de-418b-8335-3e19d55e2800" satisfied condition "Succeeded or Failed"
May 31 18:25:46.290: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-28b71580-19de-418b-8335-3e19d55e2800 container secret-volume-test: <nil>
STEP: delete the pod
May 31 18:25:46.327: INFO: Waiting for pod pod-secrets-28b71580-19de-418b-8335-3e19d55e2800 to disappear
May 31 18:25:46.333: INFO: Pod pod-secrets-28b71580-19de-418b-8335-3e19d55e2800 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
May 31 18:25:46.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4348" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":115,"skipped":1961,"failed":0}
S
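
The "non-root with defaultMode and fsGroup" combination maps to the secret volume's DefaultMode plus a pod-level security context. A sketch with assumed uid, gid, and mode values:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    mode := int32(0440)
    uid, gid := int64(1000), int64(2000)
    spec := corev1.PodSpec{
        SecurityContext: &corev1.PodSecurityContext{
            RunAsUser: &uid, // non-root
            FSGroup:   &gid, // group ownership applied to the volume's files
        },
        Volumes: []corev1.Volume{{
            Name: "secret-volume",
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{
                    SecretName:  "secret-test", // assumed name
                    DefaultMode: &mode,
                },
            },
        }},
    }
    fmt.Println(len(spec.Volumes))
}
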
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-33c1acb6-83b5-4965-ad48-97ce478b393b
STEP: Creating a pod to test consume configMaps
May 31 18:25:46.434: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ec31499f-db04-4b3b-8c6f-8c8cff44ee3d" in namespace "projected-2022" to be "Succeeded or Failed"
May 31 18:25:46.439: INFO: Pod "pod-projected-configmaps-ec31499f-db04-4b3b-8c6f-8c8cff44ee3d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.642624ms
May 31 18:25:48.447: INFO: Pod "pod-projected-configmaps-ec31499f-db04-4b3b-8c6f-8c8cff44ee3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013056831s
May 31 18:25:50.456: INFO: Pod "pod-projected-configmaps-ec31499f-db04-4b3b-8c6f-8c8cff44ee3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022155066s
STEP: Saw pod success
May 31 18:25:50.456: INFO: Pod "pod-projected-configmaps-ec31499f-db04-4b3b-8c6f-8c8cff44ee3d" satisfied condition "Succeeded or Failed"
May 31 18:25:50.464: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-ec31499f-db04-4b3b-8c6f-8c8cff44ee3d container projected-configmap-volume-test: <nil>
STEP: delete the pod
May 31 18:25:50.490: INFO: Waiting for pod pod-projected-configmaps-ec31499f-db04-4b3b-8c6f-8c8cff44ee3d to disappear
May 31 18:25:50.494: INFO: Pod pod-projected-configmaps-ec31499f-db04-4b3b-8c6f-8c8cff44ee3d no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
May 31 18:25:50.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2022" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":116,"skipped":1962,"failed":0}

------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 13 lines ...
May 31 18:25:50.587: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-3198 /api/v1/namespaces/watch-3198/configmaps/e2e-watch-test-resource-version 4022d6b8-be63-42b2-abb1-03398388ebd9 13547 0 2020-05-31 18:25:50 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-05-31 18:25:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 31 18:25:50.587: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-3198 /api/v1/namespaces/watch-3198/configmaps/e2e-watch-test-resource-version 4022d6b8-be63-42b2-abb1-03398388ebd9 13548 0 2020-05-31 18:25:50 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-05-31 18:25:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
May 31 18:25:50.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3198" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":292,"completed":117,"skipped":1962,"failed":0}
SSSSSSSSSSSSSSSS
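
Starting a watch "from a specific resource version" means listing first and handing the returned ResourceVersion to the watch, so no event older than that point is ever delivered. Sketch (kubeconfig from this log; namespace assumed):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/kind-test-config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(config)

    // The list response carries the ResourceVersion to resume from.
    list, err := client.CoreV1().ConfigMaps("default").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    w, err := client.CoreV1().ConfigMaps("default").Watch(context.TODO(),
        metav1.ListOptions{ResourceVersion: list.ResourceVersion})
    if err != nil {
        panic(err)
    }
    defer w.Stop()
    fmt.Println("watching from", list.ResourceVersion)
}
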
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 8 lines ...
May 31 18:25:50.688: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"97c2231e-0793-4794-a8b2-a3d6a75b45f4", Controller:(*bool)(0xc003ba9726), BlockOwnerDeletion:(*bool)(0xc003ba9727)}}
May 31 18:25:50.696: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"9d871eb9-ba29-4cf7-8cd3-9e368d6c9aea", Controller:(*bool)(0xc003b3de86), BlockOwnerDeletion:(*bool)(0xc003b3de87)}}
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
May 31 18:25:55.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1905" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":292,"completed":118,"skipped":1978,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
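
The pod1/pod2/pod3 OwnerReferences dumped above form the dependency circle, and each link is just metadata. A sketch of how one such link is built (UIDs invented here; a real reference must carry the owner's server-assigned UID or the GC treats it as dangling):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func ownedBy(owner *corev1.Pod) metav1.OwnerReference {
    t := true
    return metav1.OwnerReference{
        APIVersion:         "v1",
        Kind:               "Pod",
        Name:               owner.Name,
        UID:                owner.UID,
        Controller:         &t,
        BlockOwnerDeletion: &t,
    }
}

func main() {
    pod1 := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "pod1", UID: "uid-1"}}
    pod2 := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "pod2", UID: "uid-2"}}
    // pod2 depends on pod1; chaining pod3 onto pod2 and pod1 onto pod3
    // closes the circle the test proves the GC is not blocked by.
    pod2.OwnerReferences = []metav1.OwnerReference{ownedBy(pod1)}
    fmt.Println(pod2.OwnerReferences[0].Name)
}
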
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 18:25:55.760: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b0f42531-83c4-427f-802e-44b87c9f6397" in namespace "projected-6776" to be "Succeeded or Failed"
May 31 18:25:55.764: INFO: Pod "downwardapi-volume-b0f42531-83c4-427f-802e-44b87c9f6397": Phase="Pending", Reason="", readiness=false. Elapsed: 3.588912ms
May 31 18:25:57.775: INFO: Pod "downwardapi-volume-b0f42531-83c4-427f-802e-44b87c9f6397": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014652248s
May 31 18:25:59.783: INFO: Pod "downwardapi-volume-b0f42531-83c4-427f-802e-44b87c9f6397": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022582691s
STEP: Saw pod success
May 31 18:25:59.783: INFO: Pod "downwardapi-volume-b0f42531-83c4-427f-802e-44b87c9f6397" satisfied condition "Succeeded or Failed"
May 31 18:25:59.790: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-b0f42531-83c4-427f-802e-44b87c9f6397 container client-container: <nil>
STEP: delete the pod
May 31 18:25:59.831: INFO: Waiting for pod downwardapi-volume-b0f42531-83c4-427f-802e-44b87c9f6397 to disappear
May 31 18:25:59.838: INFO: Pod downwardapi-volume-b0f42531-83c4-427f-802e-44b87c9f6397 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
May 31 18:25:59.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6776" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":119,"skipped":2008,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
May 31 18:25:59.855: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override all
May 31 18:25:59.900: INFO: Waiting up to 5m0s for pod "client-containers-579f613e-d2f3-4ad0-b42b-ab9d8347d36a" in namespace "containers-922" to be "Succeeded or Failed"
May 31 18:25:59.904: INFO: Pod "client-containers-579f613e-d2f3-4ad0-b42b-ab9d8347d36a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.76187ms
May 31 18:26:01.915: INFO: Pod "client-containers-579f613e-d2f3-4ad0-b42b-ab9d8347d36a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013051438s
May 31 18:26:03.923: INFO: Pod "client-containers-579f613e-d2f3-4ad0-b42b-ab9d8347d36a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021556476s
STEP: Saw pod success
May 31 18:26:03.923: INFO: Pod "client-containers-579f613e-d2f3-4ad0-b42b-ab9d8347d36a" satisfied condition "Succeeded or Failed"
May 31 18:26:03.928: INFO: Trying to get logs from node kind-worker2 pod client-containers-579f613e-d2f3-4ad0-b42b-ab9d8347d36a container test-container: <nil>
STEP: delete the pod
May 31 18:26:03.952: INFO: Waiting for pod client-containers-579f613e-d2f3-4ad0-b42b-ab9d8347d36a to disappear
May 31 18:26:03.958: INFO: Pod client-containers-579f613e-d2f3-4ad0-b42b-ab9d8347d36a no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
May 31 18:26:03.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-922" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":292,"completed":120,"skipped":2032,"failed":0}
SSSSSSSSSSSSSS
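
"Override the image's default command and arguments" translates to the container's Command and Args fields, which replace the image's ENTRYPOINT and CMD respectively. A sketch with an assumed image and argv:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    c := corev1.Container{
        Name:    "test-container",
        Image:   "busybox",                                  // illustrative image
        Command: []string{"/bin/echo"},                      // overrides ENTRYPOINT
        Args:    []string{"override", "all", "arguments"},   // overrides CMD
    }
    fmt.Println(c.Command, c.Args)
}
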
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 18:26:11.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2874" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":292,"completed":121,"skipped":2046,"failed":0}
SS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
May 31 18:26:14.441: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
May 31 18:26:14.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-468" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":292,"completed":122,"skipped":2048,"failed":0}
SSSSSSSSSSS
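
FallbackToLogsOnError means: if the container exits nonzero without writing to its termination-message path, the tail of its log is used instead; on clean success, as asserted above, the message stays empty. Sketch of the fields involved:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    c := corev1.Container{
        Name:                     "termination-message-container",
        Image:                    "busybox",
        TerminationMessagePath:   "/dev/termination-log",
        TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
    }
    fmt.Println(c.TerminationMessagePolicy)
}
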
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 7 lines ...
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
May 31 18:26:19.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3246" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":292,"completed":123,"skipped":2059,"failed":0}

------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 10 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 18:26:19.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6088" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":292,"completed":124,"skipped":2059,"failed":0}
S
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
May 31 18:26:23.487: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:23.500: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:23.524: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:23.528: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:23.534: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:23.542: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:23.556: INFO: Lookups using dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5232.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5232.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local jessie_udp@dns-test-service-2.dns-5232.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5232.svc.cluster.local]

May 31 18:26:28.567: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:28.579: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:28.590: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:28.602: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:28.630: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:28.636: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:28.643: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:28.649: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:28.660: INFO: Lookups using dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5232.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5232.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local jessie_udp@dns-test-service-2.dns-5232.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5232.svc.cluster.local]

May 31 18:26:33.567: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:33.572: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:33.579: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:33.583: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:33.611: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:33.619: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:33.626: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:33.635: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:33.651: INFO: Lookups using dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5232.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5232.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local jessie_udp@dns-test-service-2.dns-5232.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5232.svc.cluster.local]

May 31 18:26:38.591: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:38.640: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:38.656: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:38.672: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:38.726: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:38.746: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:38.772: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:38.793: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:38.826: INFO: Lookups using dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5232.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5232.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local jessie_udp@dns-test-service-2.dns-5232.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5232.svc.cluster.local]

May 31 18:26:43.568: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:43.578: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:43.594: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:43.603: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:43.618: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:43.623: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:43.629: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:43.634: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:43.644: INFO: Lookups using dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5232.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5232.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local jessie_udp@dns-test-service-2.dns-5232.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5232.svc.cluster.local]

May 31 18:26:48.571: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:48.579: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:48.590: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:48.595: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:48.614: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:48.619: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:48.623: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:48.628: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5232.svc.cluster.local from pod dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887: the server could not find the requested resource (get pods dns-test-fc92fabd-7aff-4332-befc-99585d69a887)
May 31 18:26:48.639: INFO: Lookups using dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5232.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5232.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local jessie_udp@dns-test-service-2.dns-5232.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5232.svc.cluster.local]

May 31 18:26:53.707: INFO: DNS probes using dns-5232/dns-test-fc92fabd-7aff-4332-befc-99585d69a887 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
May 31 18:26:53.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5232" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":292,"completed":125,"skipped":2060,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
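
The names probed above, such as dns-querier-2.dns-test-service-2.dns-5232.svc.cluster.local, are synthesized from the pod's hostname and subdomain plus a headless Service named after the subdomain; the repeated "Unable to read" rounds are the test polling until CoreDNS serves the records. The relevant spec fields:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    spec := corev1.PodSpec{
        Hostname:  "dns-querier-2",
        Subdomain: "dns-test-service-2", // must match a headless Service name
    }
    fmt.Println(spec.Hostname + "." + spec.Subdomain)
}
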
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating secret secrets-5477/secret-test-37372345-afbb-460f-8de8-0fd06ac07cab
STEP: Creating a pod to test consume secrets
May 31 18:26:53.954: INFO: Waiting up to 5m0s for pod "pod-configmaps-9fc3bf76-72f1-4a7f-acd2-cf4277c1c420" in namespace "secrets-5477" to be "Succeeded or Failed"
May 31 18:26:53.962: INFO: Pod "pod-configmaps-9fc3bf76-72f1-4a7f-acd2-cf4277c1c420": Phase="Pending", Reason="", readiness=false. Elapsed: 7.528412ms
May 31 18:26:55.976: INFO: Pod "pod-configmaps-9fc3bf76-72f1-4a7f-acd2-cf4277c1c420": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021303871s
May 31 18:26:57.992: INFO: Pod "pod-configmaps-9fc3bf76-72f1-4a7f-acd2-cf4277c1c420": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037393928s
STEP: Saw pod success
May 31 18:26:57.992: INFO: Pod "pod-configmaps-9fc3bf76-72f1-4a7f-acd2-cf4277c1c420" satisfied condition "Succeeded or Failed"
May 31 18:26:58.002: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-9fc3bf76-72f1-4a7f-acd2-cf4277c1c420 container env-test: <nil>
STEP: delete the pod
May 31 18:26:58.048: INFO: Waiting for pod pod-configmaps-9fc3bf76-72f1-4a7f-acd2-cf4277c1c420 to disappear
May 31 18:26:58.058: INFO: Pod pod-configmaps-9fc3bf76-72f1-4a7f-acd2-cf4277c1c420 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
May 31 18:26:58.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5477" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":292,"completed":126,"skipped":2088,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
May 31 18:27:04.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9557" for this suite.
STEP: Destroying namespace "webhook-9557-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":292,"completed":127,"skipped":2185,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 101 lines ...
May 31 18:27:12.741: INFO: Pod "webserver-deployment-84855cf797-zbj9m" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-zbj9m webserver-deployment-84855cf797- deployment-1186 /api/v1/namespaces/deployment-1186/pods/webserver-deployment-84855cf797-zbj9m 4bdb9196-a62c-4fdc-9deb-3a0a16f68c9e 14486 0 2020-05-31 18:27:12 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 26f9701d-afde-4bd3-a356-a45afd43f8b6 0xc002224550 0xc002224551}] []  [{kube-controller-manager Update v1 2020-05-31 18:27:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26f9701d-afde-4bd3-a356-a45afd43f8b6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-64mzp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-64mzp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-64mzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]H
ostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 18:27:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
May 31 18:27:12.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1186" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":292,"completed":128,"skipped":2200,"failed":0}
SSSSS
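
Proportional scaling applies when a Deployment is resized mid-rollout: the extra replicas are split between old and new ReplicaSets in proportion to their sizes, within the rolling-update bounds. A sketch of those bounds (values assumed, not read from the test):

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    maxSurge := intstr.FromInt(3)
    maxUnavailable := intstr.FromInt(2)
    strategy := appsv1.DeploymentStrategy{
        Type: appsv1.RollingUpdateDeploymentStrategyType,
        RollingUpdate: &appsv1.RollingUpdateDeployment{
            MaxSurge:       &maxSurge,       // headroom above desired replicas
            MaxUnavailable: &maxUnavailable, // tolerated shortfall below it
        },
    }
    fmt.Println(strategy.Type)
}
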
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-7619ff86-986b-4926-b23a-4dcb6eea5ef9
STEP: Creating a pod to test consume secrets
May 31 18:27:13.107: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8f8c53a6-de56-4b44-ad3d-b395e4fe5aee" in namespace "projected-7198" to be "Succeeded or Failed"
May 31 18:27:13.188: INFO: Pod "pod-projected-secrets-8f8c53a6-de56-4b44-ad3d-b395e4fe5aee": Phase="Pending", Reason="", readiness=false. Elapsed: 81.331892ms
May 31 18:27:15.195: INFO: Pod "pod-projected-secrets-8f8c53a6-de56-4b44-ad3d-b395e4fe5aee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088443075s
May 31 18:27:17.212: INFO: Pod "pod-projected-secrets-8f8c53a6-de56-4b44-ad3d-b395e4fe5aee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105666677s
May 31 18:27:19.232: INFO: Pod "pod-projected-secrets-8f8c53a6-de56-4b44-ad3d-b395e4fe5aee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125625243s
May 31 18:27:21.248: INFO: Pod "pod-projected-secrets-8f8c53a6-de56-4b44-ad3d-b395e4fe5aee": Phase="Pending", Reason="", readiness=false. Elapsed: 8.141652686s
May 31 18:27:23.263: INFO: Pod "pod-projected-secrets-8f8c53a6-de56-4b44-ad3d-b395e4fe5aee": Phase="Running", Reason="", readiness=true. Elapsed: 10.156355756s
May 31 18:27:25.274: INFO: Pod "pod-projected-secrets-8f8c53a6-de56-4b44-ad3d-b395e4fe5aee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.167271033s
STEP: Saw pod success
May 31 18:27:25.274: INFO: Pod "pod-projected-secrets-8f8c53a6-de56-4b44-ad3d-b395e4fe5aee" satisfied condition "Succeeded or Failed"
May 31 18:27:25.280: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-8f8c53a6-de56-4b44-ad3d-b395e4fe5aee container projected-secret-volume-test: <nil>
STEP: delete the pod
May 31 18:27:25.319: INFO: Waiting for pod pod-projected-secrets-8f8c53a6-de56-4b44-ad3d-b395e4fe5aee to disappear
May 31 18:27:25.330: INFO: Pod pod-projected-secrets-8f8c53a6-de56-4b44-ad3d-b395e4fe5aee no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
May 31 18:27:25.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7198" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":129,"skipped":2205,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-1a16f084-260a-4989-9ea6-4d95445837b3
STEP: Creating a pod to test consume configMaps
May 31 18:27:25.431: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c09db1c0-8239-4e3d-90eb-09af29b76cb5" in namespace "projected-5113" to be "Succeeded or Failed"
May 31 18:27:25.435: INFO: Pod "pod-projected-configmaps-c09db1c0-8239-4e3d-90eb-09af29b76cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188496ms
May 31 18:27:27.440: INFO: Pod "pod-projected-configmaps-c09db1c0-8239-4e3d-90eb-09af29b76cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00900665s
May 31 18:27:29.454: INFO: Pod "pod-projected-configmaps-c09db1c0-8239-4e3d-90eb-09af29b76cb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023033628s
STEP: Saw pod success
May 31 18:27:29.454: INFO: Pod "pod-projected-configmaps-c09db1c0-8239-4e3d-90eb-09af29b76cb5" satisfied condition "Succeeded or Failed"
May 31 18:27:29.468: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-c09db1c0-8239-4e3d-90eb-09af29b76cb5 container projected-configmap-volume-test: <nil>
STEP: delete the pod
May 31 18:27:29.508: INFO: Waiting for pod pod-projected-configmaps-c09db1c0-8239-4e3d-90eb-09af29b76cb5 to disappear
May 31 18:27:29.518: INFO: Pod pod-projected-configmaps-c09db1c0-8239-4e3d-90eb-09af29b76cb5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
May 31 18:27:29.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5113" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":292,"completed":130,"skipped":2210,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 19 lines ...
May 31 18:27:36.831: INFO: stderr: ""
May 31 18:27:36.832: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 18:27:36.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8785" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":292,"completed":131,"skipped":2235,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] PreStop
... skipping 26 lines ...
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  test/e2e/framework/framework.go:175
May 31 18:27:50.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-8002" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":292,"completed":132,"skipped":2247,"failed":0}

------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
May 31 18:28:04.158: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 31 18:28:07.826: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 18:28:21.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-443" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":292,"completed":133,"skipped":2247,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-4c08039f-7e92-4cbc-8d36-74f5ed90d8d6
STEP: Creating a pod to test consume secrets
May 31 18:28:22.019: INFO: Waiting up to 5m0s for pod "pod-secrets-2ec2fe17-664a-424f-90d7-b70f467c0049" in namespace "secrets-4892" to be "Succeeded or Failed"
May 31 18:28:22.023: INFO: Pod "pod-secrets-2ec2fe17-664a-424f-90d7-b70f467c0049": Phase="Pending", Reason="", readiness=false. Elapsed: 3.807599ms
May 31 18:28:24.028: INFO: Pod "pod-secrets-2ec2fe17-664a-424f-90d7-b70f467c0049": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009398541s
STEP: Saw pod success
May 31 18:28:24.028: INFO: Pod "pod-secrets-2ec2fe17-664a-424f-90d7-b70f467c0049" satisfied condition "Succeeded or Failed"
May 31 18:28:24.032: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-2ec2fe17-664a-424f-90d7-b70f467c0049 container secret-env-test: <nil>
STEP: delete the pod
May 31 18:28:24.052: INFO: Waiting for pod pod-secrets-2ec2fe17-664a-424f-90d7-b70f467c0049 to disappear
May 31 18:28:24.057: INFO: Pod pod-secrets-2ec2fe17-664a-424f-90d7-b70f467c0049 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
May 31 18:28:24.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4892" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":292,"completed":134,"skipped":2252,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 141 lines ...
May 31 18:28:49.516: INFO: stderr: ""
May 31 18:28:49.516: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 18:28:49.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6711" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":292,"completed":135,"skipped":2270,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 18:28:49.530: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 31 18:28:49.600: INFO: Waiting up to 5m0s for pod "pod-e50f5232-cb56-45f5-a0d0-ec904b2ed2d0" in namespace "emptydir-2277" to be "Succeeded or Failed"
May 31 18:28:49.612: INFO: Pod "pod-e50f5232-cb56-45f5-a0d0-ec904b2ed2d0": Phase="Pending", Reason="", readiness=false. Elapsed: 11.920421ms
May 31 18:28:51.619: INFO: Pod "pod-e50f5232-cb56-45f5-a0d0-ec904b2ed2d0": Phase="Running", Reason="", readiness=true. Elapsed: 2.019638631s
May 31 18:28:53.628: INFO: Pod "pod-e50f5232-cb56-45f5-a0d0-ec904b2ed2d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028239527s
STEP: Saw pod success
May 31 18:28:53.628: INFO: Pod "pod-e50f5232-cb56-45f5-a0d0-ec904b2ed2d0" satisfied condition "Succeeded or Failed"
May 31 18:28:53.632: INFO: Trying to get logs from node kind-worker pod pod-e50f5232-cb56-45f5-a0d0-ec904b2ed2d0 container test-container: <nil>
STEP: delete the pod
May 31 18:28:53.683: INFO: Waiting for pod pod-e50f5232-cb56-45f5-a0d0-ec904b2ed2d0 to disappear
May 31 18:28:53.686: INFO: Pod pod-e50f5232-cb56-45f5-a0d0-ec904b2ed2d0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 18:28:53.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2277" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":136,"skipped":2288,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 18:28:53.736: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c64bcbe2-e513-4aac-8b0d-d3e80e79aae8" in namespace "projected-8174" to be "Succeeded or Failed"
May 31 18:28:53.742: INFO: Pod "downwardapi-volume-c64bcbe2-e513-4aac-8b0d-d3e80e79aae8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.139013ms
May 31 18:28:55.748: INFO: Pod "downwardapi-volume-c64bcbe2-e513-4aac-8b0d-d3e80e79aae8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011823343s
May 31 18:28:57.754: INFO: Pod "downwardapi-volume-c64bcbe2-e513-4aac-8b0d-d3e80e79aae8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017737384s
STEP: Saw pod success
May 31 18:28:57.754: INFO: Pod "downwardapi-volume-c64bcbe2-e513-4aac-8b0d-d3e80e79aae8" satisfied condition "Succeeded or Failed"
May 31 18:28:57.762: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-c64bcbe2-e513-4aac-8b0d-d3e80e79aae8 container client-container: <nil>
STEP: delete the pod
May 31 18:28:57.788: INFO: Waiting for pod downwardapi-volume-c64bcbe2-e513-4aac-8b0d-d3e80e79aae8 to disappear
May 31 18:28:57.791: INFO: Pod downwardapi-volume-c64bcbe2-e513-4aac-8b0d-d3e80e79aae8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
May 31 18:28:57.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8174" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":292,"completed":137,"skipped":2319,"failed":0}
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 64 lines ...
May 31 18:31:30.538: INFO: Waiting for statefulset status.replicas updated to 0
May 31 18:31:30.548: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
May 31 18:31:30.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-154" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":292,"completed":138,"skipped":2325,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-0595cd50-9175-435a-a478-0237b70066c2
STEP: Creating a pod to test consume configMaps
May 31 18:31:30.630: INFO: Waiting up to 5m0s for pod "pod-configmaps-21cea0e1-5bd1-4cd8-9879-08e5d2bb7169" in namespace "configmap-7701" to be "Succeeded or Failed"
May 31 18:31:30.650: INFO: Pod "pod-configmaps-21cea0e1-5bd1-4cd8-9879-08e5d2bb7169": Phase="Pending", Reason="", readiness=false. Elapsed: 19.608043ms
May 31 18:31:32.658: INFO: Pod "pod-configmaps-21cea0e1-5bd1-4cd8-9879-08e5d2bb7169": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027150308s
May 31 18:31:34.669: INFO: Pod "pod-configmaps-21cea0e1-5bd1-4cd8-9879-08e5d2bb7169": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039019164s
STEP: Saw pod success
May 31 18:31:34.670: INFO: Pod "pod-configmaps-21cea0e1-5bd1-4cd8-9879-08e5d2bb7169" satisfied condition "Succeeded or Failed"
May 31 18:31:34.678: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-21cea0e1-5bd1-4cd8-9879-08e5d2bb7169 container configmap-volume-test: <nil>
STEP: delete the pod
May 31 18:31:34.720: INFO: Waiting for pod pod-configmaps-21cea0e1-5bd1-4cd8-9879-08e5d2bb7169 to disappear
May 31 18:31:34.724: INFO: Pod pod-configmaps-21cea0e1-5bd1-4cd8-9879-08e5d2bb7169 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
May 31 18:31:34.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7701" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":292,"completed":139,"skipped":2350,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  test/e2e/framework/framework.go:175
May 31 18:31:40.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9383" for this suite.
STEP: Destroying namespace "webhook-9383-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":292,"completed":140,"skipped":2351,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 15 lines ...
May 31 18:31:46.728: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  test/e2e/framework/framework.go:597
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 18:31:58.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6868" for this suite.
STEP: Destroying namespace "webhook-6868-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":292,"completed":141,"skipped":2368,"failed":0}
SS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 18:32:16.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9893" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":292,"completed":142,"skipped":2370,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 9 lines ...
STEP: creating pod
May 31 18:32:18.655: INFO: Pod pod-hostip-ebf5d714-942a-4fb6-85ab-0d19441d1b13 has hostIP: 172.18.0.4
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
May 31 18:32:18.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1155" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":292,"completed":143,"skipped":2406,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 18:32:18.716: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e753968a-4783-4291-9d26-946b2cec3d0e" in namespace "projected-2597" to be "Succeeded or Failed"
May 31 18:32:18.728: INFO: Pod "downwardapi-volume-e753968a-4783-4291-9d26-946b2cec3d0e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.22924ms
May 31 18:32:20.734: INFO: Pod "downwardapi-volume-e753968a-4783-4291-9d26-946b2cec3d0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017095476s
May 31 18:32:22.739: INFO: Pod "downwardapi-volume-e753968a-4783-4291-9d26-946b2cec3d0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022781759s
STEP: Saw pod success
May 31 18:32:22.739: INFO: Pod "downwardapi-volume-e753968a-4783-4291-9d26-946b2cec3d0e" satisfied condition "Succeeded or Failed"
May 31 18:32:22.744: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-e753968a-4783-4291-9d26-946b2cec3d0e container client-container: <nil>
STEP: delete the pod
May 31 18:32:22.763: INFO: Waiting for pod downwardapi-volume-e753968a-4783-4291-9d26-946b2cec3d0e to disappear
May 31 18:32:22.770: INFO: Pod downwardapi-volume-e753968a-4783-4291-9d26-946b2cec3d0e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
May 31 18:32:22.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2597" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":292,"completed":144,"skipped":2415,"failed":0}
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
May 31 18:32:26.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9991" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":145,"skipped":2420,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 18:32:26.927: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7abdb3f4-4449-4909-9787-0b665cc8633e" in namespace "downward-api-9416" to be "Succeeded or Failed"
May 31 18:32:26.933: INFO: Pod "downwardapi-volume-7abdb3f4-4449-4909-9787-0b665cc8633e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.907539ms
May 31 18:32:28.940: INFO: Pod "downwardapi-volume-7abdb3f4-4449-4909-9787-0b665cc8633e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012278525s
May 31 18:32:30.947: INFO: Pod "downwardapi-volume-7abdb3f4-4449-4909-9787-0b665cc8633e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019591099s
STEP: Saw pod success
May 31 18:32:30.947: INFO: Pod "downwardapi-volume-7abdb3f4-4449-4909-9787-0b665cc8633e" satisfied condition "Succeeded or Failed"
May 31 18:32:30.952: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-7abdb3f4-4449-4909-9787-0b665cc8633e container client-container: <nil>
STEP: delete the pod
May 31 18:32:30.971: INFO: Waiting for pod downwardapi-volume-7abdb3f4-4449-4909-9787-0b665cc8633e to disappear
May 31 18:32:30.973: INFO: Pod downwardapi-volume-7abdb3f4-4449-4909-9787-0b665cc8633e no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
May 31 18:32:30.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9416" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":292,"completed":146,"skipped":2451,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-c7f8d0f3-fda3-46eb-8988-46580b1e2a37
STEP: Creating a pod to test consume secrets
May 31 18:32:31.052: INFO: Waiting up to 5m0s for pod "pod-secrets-dbe7fade-a391-4b36-b428-64f2fd96c6a5" in namespace "secrets-8563" to be "Succeeded or Failed"
May 31 18:32:31.055: INFO: Pod "pod-secrets-dbe7fade-a391-4b36-b428-64f2fd96c6a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.87404ms
May 31 18:32:33.064: INFO: Pod "pod-secrets-dbe7fade-a391-4b36-b428-64f2fd96c6a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011863645s
May 31 18:32:35.074: INFO: Pod "pod-secrets-dbe7fade-a391-4b36-b428-64f2fd96c6a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021813138s
STEP: Saw pod success
May 31 18:32:35.074: INFO: Pod "pod-secrets-dbe7fade-a391-4b36-b428-64f2fd96c6a5" satisfied condition "Succeeded or Failed"
May 31 18:32:35.079: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-dbe7fade-a391-4b36-b428-64f2fd96c6a5 container secret-volume-test: <nil>
STEP: delete the pod
May 31 18:32:35.104: INFO: Waiting for pod pod-secrets-dbe7fade-a391-4b36-b428-64f2fd96c6a5 to disappear
May 31 18:32:35.107: INFO: Pod pod-secrets-dbe7fade-a391-4b36-b428-64f2fd96c6a5 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
May 31 18:32:35.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8563" for this suite.
STEP: Destroying namespace "secret-namespace-6094" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":292,"completed":147,"skipped":2479,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 18:32:35.159: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1eedc912-3ed7-443d-83fe-5497c8b47dc9" in namespace "downward-api-5920" to be "Succeeded or Failed"
May 31 18:32:35.163: INFO: Pod "downwardapi-volume-1eedc912-3ed7-443d-83fe-5497c8b47dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.450641ms
May 31 18:32:37.170: INFO: Pod "downwardapi-volume-1eedc912-3ed7-443d-83fe-5497c8b47dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010559602s
May 31 18:32:39.179: INFO: Pod "downwardapi-volume-1eedc912-3ed7-443d-83fe-5497c8b47dc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019679527s
STEP: Saw pod success
May 31 18:32:39.179: INFO: Pod "downwardapi-volume-1eedc912-3ed7-443d-83fe-5497c8b47dc9" satisfied condition "Succeeded or Failed"
May 31 18:32:39.186: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-1eedc912-3ed7-443d-83fe-5497c8b47dc9 container client-container: <nil>
STEP: delete the pod
May 31 18:32:39.223: INFO: Waiting for pod downwardapi-volume-1eedc912-3ed7-443d-83fe-5497c8b47dc9 to disappear
May 31 18:32:39.226: INFO: Pod downwardapi-volume-1eedc912-3ed7-443d-83fe-5497c8b47dc9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
May 31 18:32:39.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5920" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":148,"skipped":2483,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 18:32:39.287: INFO: Waiting up to 5m0s for pod "downwardapi-volume-633c52ba-9d15-4168-b188-4176a987f122" in namespace "projected-128" to be "Succeeded or Failed"
May 31 18:32:39.290: INFO: Pod "downwardapi-volume-633c52ba-9d15-4168-b188-4176a987f122": Phase="Pending", Reason="", readiness=false. Elapsed: 3.331617ms
May 31 18:32:41.298: INFO: Pod "downwardapi-volume-633c52ba-9d15-4168-b188-4176a987f122": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011309725s
May 31 18:32:43.303: INFO: Pod "downwardapi-volume-633c52ba-9d15-4168-b188-4176a987f122": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016050044s
STEP: Saw pod success
May 31 18:32:43.303: INFO: Pod "downwardapi-volume-633c52ba-9d15-4168-b188-4176a987f122" satisfied condition "Succeeded or Failed"
May 31 18:32:43.307: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-633c52ba-9d15-4168-b188-4176a987f122 container client-container: <nil>
STEP: delete the pod
May 31 18:32:43.328: INFO: Waiting for pod downwardapi-volume-633c52ba-9d15-4168-b188-4176a987f122 to disappear
May 31 18:32:43.332: INFO: Pod downwardapi-volume-633c52ba-9d15-4168-b188-4176a987f122 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
May 31 18:32:43.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-128" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":149,"skipped":2491,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  test/e2e/framework/framework.go:175
May 31 18:32:49.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4162" for this suite.
STEP: Destroying namespace "webhook-4162-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":292,"completed":150,"skipped":2505,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 36 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
May 31 18:32:49.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0531 18:32:49.824240   11886 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-8051" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":292,"completed":151,"skipped":2516,"failed":0}
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 70 lines ...
May 31 18:33:12.912: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6233/pods","resourceVersion":"17116"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
May 31 18:33:12.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6233" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":292,"completed":152,"skipped":2517,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 27 lines ...
May 31 18:33:35.302: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 31 18:33:35.518: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
May 31 18:33:35.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8196" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":292,"completed":153,"skipped":2539,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
May 31 18:33:35.572: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-65b7efc0-3cac-4047-b88a-da0b95baf31e" in namespace "security-context-test-6474" to be "Succeeded or Failed"
May 31 18:33:35.575: INFO: Pod "alpine-nnp-false-65b7efc0-3cac-4047-b88a-da0b95baf31e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.155115ms
May 31 18:33:37.586: INFO: Pod "alpine-nnp-false-65b7efc0-3cac-4047-b88a-da0b95baf31e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014493429s
May 31 18:33:39.591: INFO: Pod "alpine-nnp-false-65b7efc0-3cac-4047-b88a-da0b95baf31e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019230465s
May 31 18:33:39.591: INFO: Pod "alpine-nnp-false-65b7efc0-3cac-4047-b88a-da0b95baf31e" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
May 31 18:33:39.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6474" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":154,"skipped":2550,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 345 lines ...
May 31 18:33:52.262: INFO: Deleting ReplicationController proxy-service-z97zp took: 10.636259ms
May 31 18:33:52.572: INFO: Terminating ReplicationController proxy-service-z97zp pods took: 309.593827ms
[AfterEach] version v1
  test/e2e/framework/framework.go:175
May 31 18:33:54.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6988" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":292,"completed":155,"skipped":2625,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
May 31 18:34:05.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8871" for this suite.
STEP: Destroying namespace "webhook-8871-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":292,"completed":156,"skipped":2642,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 17 lines ...
May 31 18:36:28.095: INFO: Restart count of pod container-probe-7524/liveness-965519a9-6898-4ffa-8d5c-a689bd4f6dca is now 5 (2m18.476775458s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
May 31 18:36:28.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7524" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":292,"completed":157,"skipped":2654,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
May 31 18:36:44.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8206" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":292,"completed":158,"skipped":2683,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 18:36:44.354: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6b2f393-a9f0-489c-a31a-f112ab5e936f" in namespace "downward-api-856" to be "Succeeded or Failed"
May 31 18:36:44.358: INFO: Pod "downwardapi-volume-a6b2f393-a9f0-489c-a31a-f112ab5e936f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.528405ms
May 31 18:36:46.364: INFO: Pod "downwardapi-volume-a6b2f393-a9f0-489c-a31a-f112ab5e936f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010172661s
May 31 18:36:48.370: INFO: Pod "downwardapi-volume-a6b2f393-a9f0-489c-a31a-f112ab5e936f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016632031s
STEP: Saw pod success
May 31 18:36:48.370: INFO: Pod "downwardapi-volume-a6b2f393-a9f0-489c-a31a-f112ab5e936f" satisfied condition "Succeeded or Failed"
May 31 18:36:48.376: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-a6b2f393-a9f0-489c-a31a-f112ab5e936f container client-container: <nil>
STEP: delete the pod
May 31 18:36:48.418: INFO: Waiting for pod downwardapi-volume-a6b2f393-a9f0-489c-a31a-f112ab5e936f to disappear
May 31 18:36:48.422: INFO: Pod downwardapi-volume-a6b2f393-a9f0-489c-a31a-f112ab5e936f no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
May 31 18:36:48.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-856" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":292,"completed":159,"skipped":2709,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
... skipping 7 lines ...
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  test/e2e/framework/framework.go:175
May 31 18:36:48.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3823" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":292,"completed":160,"skipped":2765,"failed":0}

------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 18:36:48.530: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd5362ab-9ae4-422a-b56a-3c0f57aae243" in namespace "downward-api-9207" to be "Succeeded or Failed"
May 31 18:36:48.534: INFO: Pod "downwardapi-volume-cd5362ab-9ae4-422a-b56a-3c0f57aae243": Phase="Pending", Reason="", readiness=false. Elapsed: 3.347473ms
May 31 18:36:50.542: INFO: Pod "downwardapi-volume-cd5362ab-9ae4-422a-b56a-3c0f57aae243": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011783856s
May 31 18:36:52.552: INFO: Pod "downwardapi-volume-cd5362ab-9ae4-422a-b56a-3c0f57aae243": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022233363s
STEP: Saw pod success
May 31 18:36:52.552: INFO: Pod "downwardapi-volume-cd5362ab-9ae4-422a-b56a-3c0f57aae243" satisfied condition "Succeeded or Failed"
May 31 18:36:52.556: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-cd5362ab-9ae4-422a-b56a-3c0f57aae243 container client-container: <nil>
STEP: delete the pod
May 31 18:36:52.587: INFO: Waiting for pod downwardapi-volume-cd5362ab-9ae4-422a-b56a-3c0f57aae243 to disappear
May 31 18:36:52.591: INFO: Pod downwardapi-volume-cd5362ab-9ae4-422a-b56a-3c0f57aae243 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
May 31 18:36:52.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9207" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":161,"skipped":2765,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 7 lines ...
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
May 31 18:37:52.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9170" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":292,"completed":162,"skipped":2772,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 11 lines ...
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
May 31 18:37:52.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-208" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":292,"completed":163,"skipped":2779,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
May 31 18:37:52.784: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-fdbfae91-030e-4108-8cba-be9babb470d7" in namespace "security-context-test-1214" to be "Succeeded or Failed"
May 31 18:37:52.787: INFO: Pod "busybox-privileged-false-fdbfae91-030e-4108-8cba-be9babb470d7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.551095ms
May 31 18:37:54.793: INFO: Pod "busybox-privileged-false-fdbfae91-030e-4108-8cba-be9babb470d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008650288s
May 31 18:37:56.802: INFO: Pod "busybox-privileged-false-fdbfae91-030e-4108-8cba-be9babb470d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018522931s
May 31 18:37:56.802: INFO: Pod "busybox-privileged-false-fdbfae91-030e-4108-8cba-be9babb470d7" satisfied condition "Succeeded or Failed"
May 31 18:37:56.816: INFO: Got logs for pod "busybox-privileged-false-fdbfae91-030e-4108-8cba-be9babb470d7": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
May 31 18:37:56.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1214" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":164,"skipped":2786,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-b073ad6a-c7d0-4105-9534-f1944e400aad
STEP: Creating a pod to test consume secrets
May 31 18:37:56.883: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-80ee172f-e0d4-4f17-a004-f95e4cf3bc67" in namespace "projected-9060" to be "Succeeded or Failed"
May 31 18:37:56.888: INFO: Pod "pod-projected-secrets-80ee172f-e0d4-4f17-a004-f95e4cf3bc67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.767909ms
May 31 18:37:58.892: INFO: Pod "pod-projected-secrets-80ee172f-e0d4-4f17-a004-f95e4cf3bc67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009060726s
May 31 18:38:00.899: INFO: Pod "pod-projected-secrets-80ee172f-e0d4-4f17-a004-f95e4cf3bc67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016103266s
STEP: Saw pod success
May 31 18:38:00.900: INFO: Pod "pod-projected-secrets-80ee172f-e0d4-4f17-a004-f95e4cf3bc67" satisfied condition "Succeeded or Failed"
May 31 18:38:00.906: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-80ee172f-e0d4-4f17-a004-f95e4cf3bc67 container projected-secret-volume-test: <nil>
STEP: delete the pod
May 31 18:38:00.938: INFO: Waiting for pod pod-projected-secrets-80ee172f-e0d4-4f17-a004-f95e4cf3bc67 to disappear
May 31 18:38:00.944: INFO: Pod pod-projected-secrets-80ee172f-e0d4-4f17-a004-f95e4cf3bc67 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
May 31 18:38:00.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9060" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":292,"completed":165,"skipped":2799,"failed":0}
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
May 31 18:38:00.956: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May 31 18:38:01.070: INFO: Waiting up to 5m0s for pod "downward-api-e8b89c9c-642e-4b7b-b2d5-b4f280309a1b" in namespace "downward-api-1264" to be "Succeeded or Failed"
May 31 18:38:01.074: INFO: Pod "downward-api-e8b89c9c-642e-4b7b-b2d5-b4f280309a1b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.879504ms
May 31 18:38:03.077: INFO: Pod "downward-api-e8b89c9c-642e-4b7b-b2d5-b4f280309a1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007425437s
May 31 18:38:05.086: INFO: Pod "downward-api-e8b89c9c-642e-4b7b-b2d5-b4f280309a1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015862664s
STEP: Saw pod success
May 31 18:38:05.086: INFO: Pod "downward-api-e8b89c9c-642e-4b7b-b2d5-b4f280309a1b" satisfied condition "Succeeded or Failed"
May 31 18:38:05.093: INFO: Trying to get logs from node kind-worker2 pod downward-api-e8b89c9c-642e-4b7b-b2d5-b4f280309a1b container dapi-container: <nil>
STEP: delete the pod
May 31 18:38:05.134: INFO: Waiting for pod downward-api-e8b89c9c-642e-4b7b-b2d5-b4f280309a1b to disappear
May 31 18:38:05.138: INFO: Pod downward-api-e8b89c9c-642e-4b7b-b2d5-b4f280309a1b no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
May 31 18:38:05.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1264" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":292,"completed":166,"skipped":2808,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
May 31 18:38:12.573: INFO: stderr: ""
May 31 18:38:12.574: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6325-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 18:38:16.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2302" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":292,"completed":167,"skipped":2815,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 9 lines ...
STEP: Updating configmap projected-configmap-test-upd-f41fb0c0-340b-4388-9225-1ea8dbb1ec51
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
May 31 18:38:22.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-398" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":168,"skipped":2827,"failed":0}
S
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
May 31 18:38:22.536: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-21383a58-0879-4c89-b18d-e41a75bc8497" in namespace "security-context-test-752" to be "Succeeded or Failed"
May 31 18:38:22.539: INFO: Pod "busybox-readonly-false-21383a58-0879-4c89-b18d-e41a75bc8497": Phase="Pending", Reason="", readiness=false. Elapsed: 3.676566ms
May 31 18:38:24.544: INFO: Pod "busybox-readonly-false-21383a58-0879-4c89-b18d-e41a75bc8497": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008607794s
May 31 18:38:26.552: INFO: Pod "busybox-readonly-false-21383a58-0879-4c89-b18d-e41a75bc8497": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015805607s
May 31 18:38:26.552: INFO: Pod "busybox-readonly-false-21383a58-0879-4c89-b18d-e41a75bc8497" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
May 31 18:38:26.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-752" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":292,"completed":169,"skipped":2828,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
May 31 18:38:31.171: INFO: Successfully updated pod "labelsupdateb9f41f36-b5e6-42ef-978e-a74c1e14f8d7"
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
May 31 18:38:33.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2190" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":292,"completed":170,"skipped":2840,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
May 31 18:38:39.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5550" for this suite.
STEP: Destroying namespace "webhook-5550-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":292,"completed":171,"skipped":2849,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 9 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 18:38:39.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7432" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":292,"completed":172,"skipped":2852,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 22 lines ...
May 31 18:39:09.411: INFO: Waiting for statefulset status.replicas updated to 0
May 31 18:39:09.418: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
May 31 18:39:09.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6377" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":292,"completed":173,"skipped":2873,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
... skipping 418 lines ...
May 31 18:39:21.268: INFO: 99 %ile: 816.647425ms
May 31 18:39:21.268: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  test/e2e/framework/framework.go:175
May 31 18:39:21.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-7678" for this suite.
•{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":292,"completed":174,"skipped":2883,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 18:39:21.364: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1323ec8b-40b9-44fc-ad80-a7650c5c1344" in namespace "projected-3925" to be "Succeeded or Failed"
May 31 18:39:21.372: INFO: Pod "downwardapi-volume-1323ec8b-40b9-44fc-ad80-a7650c5c1344": Phase="Pending", Reason="", readiness=false. Elapsed: 8.721042ms
May 31 18:39:23.380: INFO: Pod "downwardapi-volume-1323ec8b-40b9-44fc-ad80-a7650c5c1344": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016463984s
May 31 18:39:25.384: INFO: Pod "downwardapi-volume-1323ec8b-40b9-44fc-ad80-a7650c5c1344": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020389504s
STEP: Saw pod success
May 31 18:39:25.384: INFO: Pod "downwardapi-volume-1323ec8b-40b9-44fc-ad80-a7650c5c1344" satisfied condition "Succeeded or Failed"
May 31 18:39:25.392: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-1323ec8b-40b9-44fc-ad80-a7650c5c1344 container client-container: <nil>
STEP: delete the pod
May 31 18:39:25.415: INFO: Waiting for pod downwardapi-volume-1323ec8b-40b9-44fc-ad80-a7650c5c1344 to disappear
May 31 18:39:25.419: INFO: Pod downwardapi-volume-1323ec8b-40b9-44fc-ad80-a7650c5c1344 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
May 31 18:39:25.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3925" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":292,"completed":175,"skipped":2923,"failed":0}
SS
------------------------------
[k8s.io] Variable Expansion 
  should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 35 lines ...
May 31 18:41:38.952: INFO: Deleting pod "var-expansion-fb767553-cff6-43e2-bd32-f9572c58505b" in namespace "var-expansion-212"
May 31 18:41:38.959: INFO: Wait up to 5m0s for pod "var-expansion-fb767553-cff6-43e2-bd32-f9572c58505b" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
May 31 18:42:16.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-212" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":292,"completed":176,"skipped":2925,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-8a96789f-ffe8-4d16-b7c0-fcf1de062a47
STEP: Creating a pod to test consume configMaps
May 31 18:42:17.030: INFO: Waiting up to 5m0s for pod "pod-configmaps-5c372fa2-db1e-4831-9fa5-0217f27dbe07" in namespace "configmap-8028" to be "Succeeded or Failed"
May 31 18:42:17.032: INFO: Pod "pod-configmaps-5c372fa2-db1e-4831-9fa5-0217f27dbe07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.448675ms
May 31 18:42:19.039: INFO: Pod "pod-configmaps-5c372fa2-db1e-4831-9fa5-0217f27dbe07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009632582s
May 31 18:42:21.044: INFO: Pod "pod-configmaps-5c372fa2-db1e-4831-9fa5-0217f27dbe07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014450459s
STEP: Saw pod success
May 31 18:42:21.044: INFO: Pod "pod-configmaps-5c372fa2-db1e-4831-9fa5-0217f27dbe07" satisfied condition "Succeeded or Failed"
May 31 18:42:21.050: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-5c372fa2-db1e-4831-9fa5-0217f27dbe07 container configmap-volume-test: <nil>
STEP: delete the pod
May 31 18:42:21.086: INFO: Waiting for pod pod-configmaps-5c372fa2-db1e-4831-9fa5-0217f27dbe07 to disappear
May 31 18:42:21.088: INFO: Pod pod-configmaps-5c372fa2-db1e-4831-9fa5-0217f27dbe07 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
May 31 18:42:21.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8028" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":177,"skipped":2940,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 18:42:21.163: INFO: Waiting up to 5m0s for pod "downwardapi-volume-18874d87-535d-4680-b598-a6d3435716c2" in namespace "projected-1948" to be "Succeeded or Failed"
May 31 18:42:21.168: INFO: Pod "downwardapi-volume-18874d87-535d-4680-b598-a6d3435716c2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.237411ms
May 31 18:42:23.179: INFO: Pod "downwardapi-volume-18874d87-535d-4680-b598-a6d3435716c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016361885s
May 31 18:42:25.188: INFO: Pod "downwardapi-volume-18874d87-535d-4680-b598-a6d3435716c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024475868s
STEP: Saw pod success
May 31 18:42:25.188: INFO: Pod "downwardapi-volume-18874d87-535d-4680-b598-a6d3435716c2" satisfied condition "Succeeded or Failed"
May 31 18:42:25.196: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-18874d87-535d-4680-b598-a6d3435716c2 container client-container: <nil>
STEP: delete the pod
May 31 18:42:25.232: INFO: Waiting for pod downwardapi-volume-18874d87-535d-4680-b598-a6d3435716c2 to disappear
May 31 18:42:25.238: INFO: Pod downwardapi-volume-18874d87-535d-4680-b598-a6d3435716c2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
May 31 18:42:25.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1948" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":178,"skipped":2944,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 71 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 18:42:46.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3691" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":292,"completed":179,"skipped":3023,"failed":0}
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 31 18:42:46.834: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 31 18:42:46.911: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 31 18:42:46.916: INFO: Number of nodes with available pods: 0
May 31 18:42:46.916: INFO: Node kind-worker is running more than one daemon pod
... skipping 3 lines ...
May 31 18:42:48.932: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 31 18:42:48.962: INFO: Number of nodes with available pods: 0
May 31 18:42:48.962: INFO: Node kind-worker is running more than one daemon pod
May 31 18:42:49.927: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 31 18:42:49.938: INFO: Number of nodes with available pods: 2
May 31 18:42:49.938: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May 31 18:42:49.987: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 31 18:42:49.998: INFO: Number of nodes with available pods: 2
May 31 18:42:49.998: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2769, will wait for the garbage collector to delete the pods
May 31 18:42:51.123: INFO: Deleting DaemonSet.extensions daemon-set took: 23.998361ms
May 31 18:42:51.524: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.302806ms
May 31 19:02:51.524: INFO: ERROR: Pod "daemon-set-j9pc4" still exists. Node: "kind-worker2"
May 31 19:02:51.525: FAIL: Unexpected error:
    <*errors.errorString | 0xc0032b08c0>: {
        s: "error while waiting for pods gone daemon-set: there are 1 pods left. E.g. \"daemon-set-j9pc4\" on node \"kind-worker2\"",
    }
    error while waiting for pods gone daemon-set: there are 1 pods left. E.g. "daemon-set-j9pc4" on node "kind-worker2"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func3.1()
	test/e2e/apps/daemon_set.go:107 +0x429
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00247c100)
... skipping 13 lines ...
May 31 19:02:51.535: INFO: At 2020-05-31 18:42:46 +0000 UTC - event for daemon-set-hqtq9: {default-scheduler } Scheduled: Successfully assigned daemonsets-2769/daemon-set-hqtq9 to kind-worker2
May 31 19:02:51.535: INFO: At 2020-05-31 18:42:46 +0000 UTC - event for daemon-set-xf5tw: {default-scheduler } Scheduled: Successfully assigned daemonsets-2769/daemon-set-xf5tw to kind-worker
May 31 19:02:51.535: INFO: At 2020-05-31 18:42:48 +0000 UTC - event for daemon-set-hqtq9: {kubelet kind-worker2} Pulled: Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
May 31 19:02:51.535: INFO: At 2020-05-31 18:42:48 +0000 UTC - event for daemon-set-hqtq9: {kubelet kind-worker2} Created: Created container app
May 31 19:02:51.535: INFO: At 2020-05-31 18:42:48 +0000 UTC - event for daemon-set-xf5tw: {kubelet kind-worker} Pulled: Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
May 31 19:02:51.535: INFO: At 2020-05-31 18:42:48 +0000 UTC - event for daemon-set-xf5tw: {kubelet kind-worker} Created: Created container app
May 31 19:02:51.535: INFO: At 2020-05-31 18:42:49 +0000 UTC - event for daemon-set: {daemonset-controller } FailedDaemonPod: Found failed daemon pod daemonsets-2769/daemon-set-hqtq9 on node kind-worker2, will try to kill it
May 31 19:02:51.535: INFO: At 2020-05-31 18:42:49 +0000 UTC - event for daemon-set-hqtq9: {kubelet kind-worker2} Started: Started container app
May 31 19:02:51.535: INFO: At 2020-05-31 18:42:49 +0000 UTC - event for daemon-set-xf5tw: {kubelet kind-worker} Started: Started container app
May 31 19:02:51.535: INFO: At 2020-05-31 18:42:50 +0000 UTC - event for daemon-set: {daemonset-controller } SuccessfulDelete: Deleted pod: daemon-set-hqtq9
May 31 19:02:51.535: INFO: At 2020-05-31 18:42:50 +0000 UTC - event for daemon-set: {daemonset-controller } SuccessfulCreate: Created pod: daemon-set-j9pc4
May 31 19:02:51.535: INFO: At 2020-05-31 18:42:50 +0000 UTC - event for daemon-set-j9pc4: {default-scheduler } Scheduled: Successfully assigned daemonsets-2769/daemon-set-j9pc4 to kind-worker2
May 31 19:02:51.535: INFO: At 2020-05-31 18:42:51 +0000 UTC - event for daemon-set-hqtq9: {kubelet kind-worker2} Failed: Error: sandbox container "dbe19d921b41b7c679896111548c6175adcced40702c4ec6e7f877f09a3e64b9" is not running
May 31 19:02:51.535: INFO: At 2020-05-31 18:42:51 +0000 UTC - event for daemon-set-j9pc4: {kubelet kind-worker2} Created: Created container app
May 31 19:02:51.535: INFO: At 2020-05-31 18:42:51 +0000 UTC - event for daemon-set-j9pc4: {kubelet kind-worker2} Pulled: Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
May 31 19:02:51.535: INFO: At 2020-05-31 18:42:51 +0000 UTC - event for daemon-set-xf5tw: {kubelet kind-worker} Killing: Stopping container app
May 31 19:02:51.535: INFO: At 2020-05-31 18:42:52 +0000 UTC - event for daemon-set-j9pc4: {kubelet kind-worker2} Started: Started container app
May 31 19:02:51.540: INFO: POD               NODE          PHASE    GRACE  CONDITIONS
May 31 19:02:51.540: INFO: daemon-set-j9pc4  kind-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-31 18:42:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-31 18:42:50 +0000 UTC ContainersNotReady containers with unready status: [app]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-31 18:42:50 +0000 UTC ContainersNotReady containers with unready status: [app]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-31 18:42:50 +0000 UTC  }]
... skipping 59 lines ...
May 31 19:02:52.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2769" for this suite.

• Failure in Spec Teardown (AfterEach) [1205.294 seconds]
[sig-apps] Daemon set [Serial]
test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance] [AfterEach]
  test/e2e/framework/framework.go:597

  May 31 19:02:51.525: Unexpected error:
      <*errors.errorString | 0xc0032b08c0>: {
          s: "error while waiting for pods gone daemon-set: there are 1 pods left. E.g. \"daemon-set-j9pc4\" on node \"kind-worker2\"",
      }
      error while waiting for pods gone daemon-set: there are 1 pods left. E.g. "daemon-set-j9pc4" on node "kind-worker2"
  occurred

  test/e2e/apps/daemon_set.go:107
------------------------------
{"msg":"FAILED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":292,"completed":179,"skipped":3026,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-vtkj
STEP: Creating a pod to test atomic-volume-subpath
May 31 19:02:52.203: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vtkj" in namespace "subpath-6615" to be "Succeeded or Failed"
May 31 19:02:52.208: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.802479ms
May 31 19:02:54.228: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024765731s
May 31 19:02:56.235: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Running", Reason="", readiness=true. Elapsed: 4.032032361s
May 31 19:02:58.240: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Running", Reason="", readiness=true. Elapsed: 6.036789008s
May 31 19:03:00.247: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Running", Reason="", readiness=true. Elapsed: 8.043937698s
May 31 19:03:02.262: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Running", Reason="", readiness=true. Elapsed: 10.058891146s
... skipping 2 lines ...
May 31 19:03:08.292: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Running", Reason="", readiness=true. Elapsed: 16.089101407s
May 31 19:03:10.302: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Running", Reason="", readiness=true. Elapsed: 18.098656902s
May 31 19:03:12.308: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Running", Reason="", readiness=true. Elapsed: 20.104961024s
May 31 19:03:14.314: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Running", Reason="", readiness=true. Elapsed: 22.110790362s
May 31 19:03:16.320: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.116771631s
STEP: Saw pod success
May 31 19:03:16.320: INFO: Pod "pod-subpath-test-configmap-vtkj" satisfied condition "Succeeded or Failed"
May 31 19:03:16.327: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-configmap-vtkj container test-container-subpath-configmap-vtkj: <nil>
STEP: delete the pod
May 31 19:03:16.351: INFO: Waiting for pod pod-subpath-test-configmap-vtkj to disappear
May 31 19:03:16.354: INFO: Pod pod-subpath-test-configmap-vtkj no longer exists
STEP: Deleting pod pod-subpath-test-configmap-vtkj
May 31 19:03:16.354: INFO: Deleting pod "pod-subpath-test-configmap-vtkj" in namespace "subpath-6615"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
May 31 19:03:16.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6615" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":292,"completed":180,"skipped":3032,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 119 lines ...
May 31 19:04:10.787: INFO: Waiting for statefulset status.replicas updated to 0
May 31 19:04:10.790: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
May 31 19:04:10.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1871" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":292,"completed":181,"skipped":3065,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 46 lines ...
May 31 19:04:26.764: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5048/pods","resourceVersion":"25565"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
May 31 19:04:26.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5048" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":292,"completed":182,"skipped":3073,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
... skipping 18 lines ...
STEP: Deleting second CR
May 31 19:05:17.462: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-31T19:04:37Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-31T19:04:57Z]] name:name2 resourceVersion:25757 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:2b983c55-0333-4a0e-aeb5-0d149cd7127c] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 19:05:27.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-2880" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":292,"completed":183,"skipped":3076,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should print the output to logs [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
May 31 19:05:32.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4621" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":292,"completed":184,"skipped":3085,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 27 lines ...
May 31 19:06:44.382: INFO: Terminating ReplicationController wrapped-volume-race-a9874ffc-0e63-4f65-8a02-c58d030c3a43 pods took: 301.7025ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:175
May 31 19:06:57.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-324" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":292,"completed":185,"skipped":3137,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
May 31 19:06:57.366: INFO: stderr: ""
May 31 19:06:57.366: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-beta.0.313+46d08c89ab9f55\", GitCommit:\"46d08c89ab9f55bcaf23f0aa3742c53a72a7418a\", GitTreeState:\"clean\", BuildDate:\"2020-05-31T06:25:53Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-beta.0.313+46d08c89ab9f55\", GitCommit:\"46d08c89ab9f55bcaf23f0aa3742c53a72a7418a\", GitTreeState:\"clean\", BuildDate:\"2020-05-31T06:25:53Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 19:06:57.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1102" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":292,"completed":186,"skipped":3139,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
May 31 19:07:03.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3754" for this suite.
STEP: Destroying namespace "webhook-3754-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":292,"completed":187,"skipped":3143,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
May 31 19:07:08.067: INFO: Successfully updated pod "annotationupdateb127c94d-09c5-4096-91bd-c1b52649c428"
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
May 31 19:07:10.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1537" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":292,"completed":188,"skipped":3169,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 31 19:07:14.683: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d72fbef3-52a9-4b0e-9ede-fb40e09b1a24"
May 31 19:07:14.683: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d72fbef3-52a9-4b0e-9ede-fb40e09b1a24" in namespace "pods-7792" to be "terminated due to deadline exceeded"
May 31 19:07:14.688: INFO: Pod "pod-update-activedeadlineseconds-d72fbef3-52a9-4b0e-9ede-fb40e09b1a24": Phase="Running", Reason="", readiness=true. Elapsed: 5.180257ms
May 31 19:07:16.703: INFO: Pod "pod-update-activedeadlineseconds-d72fbef3-52a9-4b0e-9ede-fb40e09b1a24": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.020270772s
May 31 19:07:16.703: INFO: Pod "pod-update-activedeadlineseconds-d72fbef3-52a9-4b0e-9ede-fb40e09b1a24" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
May 31 19:07:16.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7792" for this suite.
•{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":292,"completed":189,"skipped":3213,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 5 lines ...
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
May 31 19:07:20.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3984" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":292,"completed":190,"skipped":3216,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-9a196ca6-1dff-42cb-a0cf-e7b0c892d281
STEP: Creating a pod to test consume secrets
May 31 19:07:20.887: INFO: Waiting up to 5m0s for pod "pod-secrets-43568d87-a088-486a-9adc-b81024fbc1c6" in namespace "secrets-5396" to be "Succeeded or Failed"
May 31 19:07:20.891: INFO: Pod "pod-secrets-43568d87-a088-486a-9adc-b81024fbc1c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.246861ms
May 31 19:07:22.899: INFO: Pod "pod-secrets-43568d87-a088-486a-9adc-b81024fbc1c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01128113s
May 31 19:07:24.903: INFO: Pod "pod-secrets-43568d87-a088-486a-9adc-b81024fbc1c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015169751s
STEP: Saw pod success
May 31 19:07:24.903: INFO: Pod "pod-secrets-43568d87-a088-486a-9adc-b81024fbc1c6" satisfied condition "Succeeded or Failed"
May 31 19:07:24.907: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-43568d87-a088-486a-9adc-b81024fbc1c6 container secret-volume-test: <nil>
STEP: delete the pod
May 31 19:07:24.922: INFO: Waiting for pod pod-secrets-43568d87-a088-486a-9adc-b81024fbc1c6 to disappear
May 31 19:07:24.926: INFO: Pod pod-secrets-43568d87-a088-486a-9adc-b81024fbc1c6 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
May 31 19:07:24.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5396" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":292,"completed":191,"skipped":3274,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-1148/configmap-test-9065a608-846d-44ac-b10c-c62bba35c1a0
STEP: Creating a pod to test consume configMaps
May 31 19:07:24.976: INFO: Waiting up to 5m0s for pod "pod-configmaps-b91c8d05-271d-40b0-8eaa-1336e91529a1" in namespace "configmap-1148" to be "Succeeded or Failed"
May 31 19:07:24.979: INFO: Pod "pod-configmaps-b91c8d05-271d-40b0-8eaa-1336e91529a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.91311ms
May 31 19:07:26.984: INFO: Pod "pod-configmaps-b91c8d05-271d-40b0-8eaa-1336e91529a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00802866s
May 31 19:07:28.988: INFO: Pod "pod-configmaps-b91c8d05-271d-40b0-8eaa-1336e91529a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011738447s
STEP: Saw pod success
May 31 19:07:28.988: INFO: Pod "pod-configmaps-b91c8d05-271d-40b0-8eaa-1336e91529a1" satisfied condition "Succeeded or Failed"
May 31 19:07:28.991: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-b91c8d05-271d-40b0-8eaa-1336e91529a1 container env-test: <nil>
STEP: delete the pod
May 31 19:07:29.008: INFO: Waiting for pod pod-configmaps-b91c8d05-271d-40b0-8eaa-1336e91529a1 to disappear
May 31 19:07:29.011: INFO: Pod pod-configmaps-b91c8d05-271d-40b0-8eaa-1336e91529a1 no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
May 31 19:07:29.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1148" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":292,"completed":192,"skipped":3281,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
May 31 19:07:29.052: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 19:07:35.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5882" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":292,"completed":193,"skipped":3287,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 16 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
May 31 19:07:48.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1902" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":292,"completed":194,"skipped":3295,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 15 lines ...
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
May 31 19:07:55.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5316" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":292,"completed":195,"skipped":3303,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should be possible to delete [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
May 31 19:07:55.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2050" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":292,"completed":196,"skipped":3316,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}

------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-a873a7f2-1f46-4dd4-9b30-696b6fcf0ff2
STEP: Creating a pod to test consume configMaps
May 31 19:07:55.606: INFO: Waiting up to 5m0s for pod "pod-configmaps-0af04e85-0393-4710-8dc2-795632645ba2" in namespace "configmap-3408" to be "Succeeded or Failed"
May 31 19:07:55.618: INFO: Pod "pod-configmaps-0af04e85-0393-4710-8dc2-795632645ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.369888ms
May 31 19:07:57.627: INFO: Pod "pod-configmaps-0af04e85-0393-4710-8dc2-795632645ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020792533s
May 31 19:07:59.635: INFO: Pod "pod-configmaps-0af04e85-0393-4710-8dc2-795632645ba2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028996313s
STEP: Saw pod success
May 31 19:07:59.636: INFO: Pod "pod-configmaps-0af04e85-0393-4710-8dc2-795632645ba2" satisfied condition "Succeeded or Failed"
May 31 19:07:59.639: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-0af04e85-0393-4710-8dc2-795632645ba2 container configmap-volume-test: <nil>
STEP: delete the pod
May 31 19:07:59.658: INFO: Waiting for pod pod-configmaps-0af04e85-0393-4710-8dc2-795632645ba2 to disappear
May 31 19:07:59.660: INFO: Pod pod-configmaps-0af04e85-0393-4710-8dc2-795632645ba2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
May 31 19:07:59.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3408" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":292,"completed":197,"skipped":3316,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected combined
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-projected-all-test-volume-69586bef-bbdf-4a33-928d-6c064cb32e96
STEP: Creating secret with name secret-projected-all-test-volume-97bf5844-68fc-4a2b-8ef5-227eb6cba851
STEP: Creating a pod to test Check all projections for projected volume plugin
May 31 19:07:59.732: INFO: Waiting up to 5m0s for pod "projected-volume-628ad5a7-8221-428b-b45a-993d19de63ca" in namespace "projected-106" to be "Succeeded or Failed"
May 31 19:07:59.735: INFO: Pod "projected-volume-628ad5a7-8221-428b-b45a-993d19de63ca": Phase="Pending", Reason="", readiness=false. Elapsed: 3.070328ms
May 31 19:08:01.744: INFO: Pod "projected-volume-628ad5a7-8221-428b-b45a-993d19de63ca": Phase="Running", Reason="", readiness=true. Elapsed: 2.011719284s
May 31 19:08:03.752: INFO: Pod "projected-volume-628ad5a7-8221-428b-b45a-993d19de63ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019624502s
STEP: Saw pod success
May 31 19:08:03.752: INFO: Pod "projected-volume-628ad5a7-8221-428b-b45a-993d19de63ca" satisfied condition "Succeeded or Failed"
May 31 19:08:03.760: INFO: Trying to get logs from node kind-worker pod projected-volume-628ad5a7-8221-428b-b45a-993d19de63ca container projected-all-volume-test: <nil>
STEP: delete the pod
May 31 19:08:03.776: INFO: Waiting for pod projected-volume-628ad5a7-8221-428b-b45a-993d19de63ca to disappear
May 31 19:08:03.780: INFO: Pod projected-volume-628ad5a7-8221-428b-b45a-993d19de63ca no longer exists
[AfterEach] [sig-storage] Projected combined
  test/e2e/framework/framework.go:175
May 31 19:08:03.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-106" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":292,"completed":198,"skipped":3324,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}

------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Pods Extended
... skipping 10 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  test/e2e/framework/framework.go:175
May 31 19:08:03.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4507" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":292,"completed":199,"skipped":3324,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
May 31 19:08:09.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4340" for this suite.
STEP: Destroying namespace "webhook-4340-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":292,"completed":200,"skipped":3340,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 19:08:10.038: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25dba21d-4023-4094-8b69-c19f0c6a8abc" in namespace "downward-api-8710" to be "Succeeded or Failed"
May 31 19:08:10.040: INFO: Pod "downwardapi-volume-25dba21d-4023-4094-8b69-c19f0c6a8abc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.399209ms
May 31 19:08:12.045: INFO: Pod "downwardapi-volume-25dba21d-4023-4094-8b69-c19f0c6a8abc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007463721s
May 31 19:08:14.051: INFO: Pod "downwardapi-volume-25dba21d-4023-4094-8b69-c19f0c6a8abc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013488972s
STEP: Saw pod success
May 31 19:08:14.051: INFO: Pod "downwardapi-volume-25dba21d-4023-4094-8b69-c19f0c6a8abc" satisfied condition "Succeeded or Failed"
May 31 19:08:14.059: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-25dba21d-4023-4094-8b69-c19f0c6a8abc container client-container: <nil>
STEP: delete the pod
May 31 19:08:14.084: INFO: Waiting for pod downwardapi-volume-25dba21d-4023-4094-8b69-c19f0c6a8abc to disappear
May 31 19:08:14.096: INFO: Pod downwardapi-volume-25dba21d-4023-4094-8b69-c19f0c6a8abc no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
May 31 19:08:14.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8710" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":201,"skipped":3342,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-projected-5wdk
STEP: Creating a pod to test atomic-volume-subpath
May 31 19:08:14.183: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-5wdk" in namespace "subpath-3533" to be "Succeeded or Failed"
May 31 19:08:14.188: INFO: Pod "pod-subpath-test-projected-5wdk": Phase="Pending", Reason="", readiness=false. Elapsed: 5.371062ms
May 31 19:08:16.196: INFO: Pod "pod-subpath-test-projected-5wdk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012523604s
May 31 19:08:18.200: INFO: Pod "pod-subpath-test-projected-5wdk": Phase="Running", Reason="", readiness=true. Elapsed: 4.017218424s
May 31 19:08:20.207: INFO: Pod "pod-subpath-test-projected-5wdk": Phase="Running", Reason="", readiness=true. Elapsed: 6.023844467s
May 31 19:08:22.212: INFO: Pod "pod-subpath-test-projected-5wdk": Phase="Running", Reason="", readiness=true. Elapsed: 8.029140472s
May 31 19:08:24.218: INFO: Pod "pod-subpath-test-projected-5wdk": Phase="Running", Reason="", readiness=true. Elapsed: 10.034673073s
... skipping 2 lines ...
May 31 19:08:30.234: INFO: Pod "pod-subpath-test-projected-5wdk": Phase="Running", Reason="", readiness=true. Elapsed: 16.050813888s
May 31 19:08:32.240: INFO: Pod "pod-subpath-test-projected-5wdk": Phase="Running", Reason="", readiness=true. Elapsed: 18.056677756s
May 31 19:08:34.250: INFO: Pod "pod-subpath-test-projected-5wdk": Phase="Running", Reason="", readiness=true. Elapsed: 20.066874325s
May 31 19:08:36.256: INFO: Pod "pod-subpath-test-projected-5wdk": Phase="Running", Reason="", readiness=true. Elapsed: 22.073407614s
May 31 19:08:38.263: INFO: Pod "pod-subpath-test-projected-5wdk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.079602981s
STEP: Saw pod success
May 31 19:08:38.263: INFO: Pod "pod-subpath-test-projected-5wdk" satisfied condition "Succeeded or Failed"
May 31 19:08:38.267: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-projected-5wdk container test-container-subpath-projected-5wdk: <nil>
STEP: delete the pod
May 31 19:08:38.284: INFO: Waiting for pod pod-subpath-test-projected-5wdk to disappear
May 31 19:08:38.287: INFO: Pod pod-subpath-test-projected-5wdk no longer exists
STEP: Deleting pod pod-subpath-test-projected-5wdk
May 31 19:08:38.287: INFO: Deleting pod "pod-subpath-test-projected-5wdk" in namespace "subpath-3533"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
May 31 19:08:38.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3533" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":292,"completed":202,"skipped":3351,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 25 lines ...
May 31 19:08:43.382: INFO: Pod "test-cleanup-deployment-6688745694-gmv76" is not available:
&Pod{ObjectMeta:{test-cleanup-deployment-6688745694-gmv76 test-cleanup-deployment-6688745694- deployment-1426 /api/v1/namespaces/deployment-1426/pods/test-cleanup-deployment-6688745694-gmv76 22eb443b-5f4c-41a9-a236-a89d4623ba9b 27787 0 2020-05-31 19:08:43 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 6f6447f8-7ec5-4a25-b367-8609dc38574d 0xc00209c687 0xc00209c688}] []  [{kube-controller-manager Update v1 2020-05-31 19:08:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f6447f8-7ec5-4a25-b367-8609dc38574d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v8wvl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v8wvl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v8wvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationS
econds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
May 31 19:08:43.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1426" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":292,"completed":203,"skipped":3371,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 10 lines ...
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
May 31 19:08:48.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7020" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":292,"completed":204,"skipped":3392,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
May 31 19:08:53.079: INFO: Successfully updated pod "labelsupdatecc67195e-f0f0-47fb-8cb2-da475127e6f2"
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
May 31 19:08:55.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4651" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":292,"completed":205,"skipped":3413,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 26 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 19:09:05.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3330" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":292,"completed":206,"skipped":3431,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 76 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
May 31 19:09:10.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4439" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":292,"completed":207,"skipped":3448,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 31 19:09:10.792: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
May 31 19:09:20.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4214" for this suite.
•{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":292,"completed":208,"skipped":3463,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 28 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 19:09:31.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3458" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":292,"completed":209,"skipped":3471,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}

------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 33 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
May 31 19:09:37.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7733" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":292,"completed":210,"skipped":3471,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 19:09:37.375: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
May 31 19:09:37.452: INFO: Waiting up to 5m0s for pod "pod-0e378b36-f564-4fd6-9120-02131a71db2b" in namespace "emptydir-8192" to be "Succeeded or Failed"
May 31 19:09:37.460: INFO: Pod "pod-0e378b36-f564-4fd6-9120-02131a71db2b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.764217ms
May 31 19:09:39.466: INFO: Pod "pod-0e378b36-f564-4fd6-9120-02131a71db2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014137324s
May 31 19:09:41.473: INFO: Pod "pod-0e378b36-f564-4fd6-9120-02131a71db2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020863386s
STEP: Saw pod success
May 31 19:09:41.474: INFO: Pod "pod-0e378b36-f564-4fd6-9120-02131a71db2b" satisfied condition "Succeeded or Failed"
May 31 19:09:41.483: INFO: Trying to get logs from node kind-worker2 pod pod-0e378b36-f564-4fd6-9120-02131a71db2b container test-container: <nil>
STEP: delete the pod
May 31 19:09:41.520: INFO: Waiting for pod pod-0e378b36-f564-4fd6-9120-02131a71db2b to disappear
May 31 19:09:41.526: INFO: Pod pod-0e378b36-f564-4fd6-9120-02131a71db2b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 19:09:41.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8192" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":211,"skipped":3472,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 36 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
May 31 19:09:57.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4962" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":292,"completed":212,"skipped":3487,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 36 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
May 31 19:09:58.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2779" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":292,"completed":213,"skipped":3509,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 28 lines ...
May 31 19:10:08.070: INFO: Pod "test-rolling-update-deployment-df7bb669b-nrn29" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-nrn29 test-rolling-update-deployment-df7bb669b- deployment-2947 /api/v1/namespaces/deployment-2947/pods/test-rolling-update-deployment-df7bb669b-nrn29 bdb8141d-5bd7-4f60-a86d-108a457cd1af 28819 0 2020-05-31 19:10:03 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-df7bb669b b9877dec-a111-4e6a-a4dd-1db676905a60 0xc002f8c4a0 0xc002f8c4a1}] []  [{kube-controller-manager Update v1 2020-05-31 19:10:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9877dec-a111-4e6a-a4dd-1db676905a60\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-31 19:10:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.219\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2vjn5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2vjn5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2vjn5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,H
ostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 19:10:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 19:10:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 19:10:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-31 19:10:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.2.219,StartTime:2020-05-31 19:10:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-31 19:10:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://5cd43e90ef5eabb624b048201595f05eaee2fb9c0460785360cf2e098ed88ecc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.219,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
May 31 19:10:08.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2947" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":292,"completed":214,"skipped":3539,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 12 lines ...
May 31 19:10:13.140: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
May 31 19:10:14.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5097" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":292,"completed":215,"skipped":3543,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
May 31 19:10:14.711: INFO: stderr: ""
May 31 19:10:14.711: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://127.0.0.1:42753\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://127.0.0.1:42753/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 19:10:14.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5348" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":292,"completed":216,"skipped":3547,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
May 31 19:10:18.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4952" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":217,"skipped":3562,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 28 lines ...
  test/e2e/framework/framework.go:175
May 31 19:10:35.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9004" for this suite.
STEP: Destroying namespace "webhook-9004-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":292,"completed":218,"skipped":3591,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
May 31 19:10:35.338: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test env composition
May 31 19:10:35.391: INFO: Waiting up to 5m0s for pod "var-expansion-980450e6-c9b2-4bb8-8350-3da5ea671966" in namespace "var-expansion-1772" to be "Succeeded or Failed"
May 31 19:10:35.395: INFO: Pod "var-expansion-980450e6-c9b2-4bb8-8350-3da5ea671966": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017967ms
May 31 19:10:37.400: INFO: Pod "var-expansion-980450e6-c9b2-4bb8-8350-3da5ea671966": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008900173s
May 31 19:10:39.404: INFO: Pod "var-expansion-980450e6-c9b2-4bb8-8350-3da5ea671966": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013415541s
STEP: Saw pod success
May 31 19:10:39.404: INFO: Pod "var-expansion-980450e6-c9b2-4bb8-8350-3da5ea671966" satisfied condition "Succeeded or Failed"
May 31 19:10:39.409: INFO: Trying to get logs from node kind-worker pod var-expansion-980450e6-c9b2-4bb8-8350-3da5ea671966 container dapi-container: <nil>
STEP: delete the pod
May 31 19:10:39.438: INFO: Waiting for pod var-expansion-980450e6-c9b2-4bb8-8350-3da5ea671966 to disappear
May 31 19:10:39.440: INFO: Pod var-expansion-980450e6-c9b2-4bb8-8350-3da5ea671966 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
May 31 19:10:39.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1772" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":292,"completed":219,"skipped":3607,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
May 31 19:10:39.488: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://127.0.0.1:42753 --kubeconfig=/root/.kube/kind-test-config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 19:10:39.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7707" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":292,"completed":220,"skipped":3630,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 16 lines ...
  test/e2e/framework/framework.go:175
May 31 19:11:08.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1948" for this suite.
STEP: Destroying namespace "nsdeletetest-5495" for this suite.
May 31 19:11:08.907: INFO: Namespace nsdeletetest-5495 was already deleted
STEP: Destroying namespace "nsdeletetest-1280" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":292,"completed":221,"skipped":3640,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-secret-5bnt
STEP: Creating a pod to test atomic-volume-subpath
May 31 19:11:08.968: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-5bnt" in namespace "subpath-5323" to be "Succeeded or Failed"
May 31 19:11:08.971: INFO: Pod "pod-subpath-test-secret-5bnt": Phase="Pending", Reason="", readiness=false. Elapsed: 3.659166ms
May 31 19:11:10.982: INFO: Pod "pod-subpath-test-secret-5bnt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01468687s
May 31 19:11:12.988: INFO: Pod "pod-subpath-test-secret-5bnt": Phase="Running", Reason="", readiness=true. Elapsed: 4.020310708s
May 31 19:11:14.997: INFO: Pod "pod-subpath-test-secret-5bnt": Phase="Running", Reason="", readiness=true. Elapsed: 6.029693552s
May 31 19:11:17.002: INFO: Pod "pod-subpath-test-secret-5bnt": Phase="Running", Reason="", readiness=true. Elapsed: 8.034737797s
May 31 19:11:19.015: INFO: Pod "pod-subpath-test-secret-5bnt": Phase="Running", Reason="", readiness=true. Elapsed: 10.046911269s
... skipping 2 lines ...
May 31 19:11:25.032: INFO: Pod "pod-subpath-test-secret-5bnt": Phase="Running", Reason="", readiness=true. Elapsed: 16.064715941s
May 31 19:11:27.041: INFO: Pod "pod-subpath-test-secret-5bnt": Phase="Running", Reason="", readiness=true. Elapsed: 18.073735808s
May 31 19:11:29.046: INFO: Pod "pod-subpath-test-secret-5bnt": Phase="Running", Reason="", readiness=true. Elapsed: 20.078193189s
May 31 19:11:31.051: INFO: Pod "pod-subpath-test-secret-5bnt": Phase="Running", Reason="", readiness=true. Elapsed: 22.083268497s
May 31 19:11:33.061: INFO: Pod "pod-subpath-test-secret-5bnt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.093258202s
STEP: Saw pod success
May 31 19:11:33.061: INFO: Pod "pod-subpath-test-secret-5bnt" satisfied condition "Succeeded or Failed"
May 31 19:11:33.068: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-secret-5bnt container test-container-subpath-secret-5bnt: <nil>
STEP: delete the pod
May 31 19:11:33.095: INFO: Waiting for pod pod-subpath-test-secret-5bnt to disappear
May 31 19:11:33.098: INFO: Pod pod-subpath-test-secret-5bnt no longer exists
STEP: Deleting pod pod-subpath-test-secret-5bnt
May 31 19:11:33.098: INFO: Deleting pod "pod-subpath-test-secret-5bnt" in namespace "subpath-5323"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
May 31 19:11:33.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5323" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":292,"completed":222,"skipped":3687,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
... skipping 12 lines ...
May 31 19:11:37.527: INFO: Terminating Job.batch foo pods took: 300.266652ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
May 31 19:12:16.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7589" for this suite.
•{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":292,"completed":223,"skipped":3710,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
S
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] 
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
... skipping 19 lines ...
May 31 19:13:56.748: INFO: Noticed Pod "taint-eviction-b2" gets evicted.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
  test/e2e/framework/framework.go:175
May 31 19:13:56.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-multiple-pods-2429" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":292,"completed":224,"skipped":3711,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 22 lines ...
May 31 19:14:22.880: INFO: The status of Pod test-webserver-af5ccd50-dc9c-4b0f-8712-a95e1dcb6746 is Running (Ready = true)
May 31 19:14:22.887: INFO: Container started at 2020-05-31 19:13:58 +0000 UTC, pod became ready at 2020-05-31 19:14:21 +0000 UTC
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
May 31 19:14:22.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2364" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":292,"completed":225,"skipped":3733,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
May 31 19:14:26.964: INFO: Initial restart count of pod test-webserver-b054c311-ec3d-43bf-be63-f53b9f71cc1d is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
May 31 19:18:27.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7135" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":292,"completed":226,"skipped":3753,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
May 31 19:19:24.072: INFO: Restart count of pod container-probe-471/busybox-91ddc840-6404-45e3-aa5c-9ed1dcc2f389 is now 1 (54.176714978s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
May 31 19:19:24.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-471" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":292,"completed":227,"skipped":3779,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...
May 31 19:19:26.391: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 31 19:19:26.677: INFO: Deleting pod dns-5726...
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
May 31 19:19:26.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5726" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":292,"completed":228,"skipped":3784,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-5550496c-d29f-417f-aed4-25c400d082fa
STEP: Creating a pod to test consume configMaps
May 31 19:19:26.770: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-479767f8-7395-4a4b-ae0b-33b36a6abec4" in namespace "projected-8098" to be "Succeeded or Failed"
May 31 19:19:26.776: INFO: Pod "pod-projected-configmaps-479767f8-7395-4a4b-ae0b-33b36a6abec4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.737213ms
May 31 19:19:28.784: INFO: Pod "pod-projected-configmaps-479767f8-7395-4a4b-ae0b-33b36a6abec4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014604087s
May 31 19:19:30.792: INFO: Pod "pod-projected-configmaps-479767f8-7395-4a4b-ae0b-33b36a6abec4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02235559s
STEP: Saw pod success
May 31 19:19:30.792: INFO: Pod "pod-projected-configmaps-479767f8-7395-4a4b-ae0b-33b36a6abec4" satisfied condition "Succeeded or Failed"
May 31 19:19:30.798: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-479767f8-7395-4a4b-ae0b-33b36a6abec4 container projected-configmap-volume-test: <nil>
STEP: delete the pod
May 31 19:19:30.835: INFO: Waiting for pod pod-projected-configmaps-479767f8-7395-4a4b-ae0b-33b36a6abec4 to disappear
May 31 19:19:30.839: INFO: Pod pod-projected-configmaps-479767f8-7395-4a4b-ae0b-33b36a6abec4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
May 31 19:19:30.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8098" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":229,"skipped":3786,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
S
------------------------------
[sig-api-machinery] Events 
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Events
... skipping 11 lines ...
STEP: deleting the test event
STEP: listing all events in all namespaces
[AfterEach] [sig-api-machinery] Events
  test/e2e/framework/framework.go:175
May 31 19:19:30.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8315" for this suite.
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":292,"completed":230,"skipped":3787,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 19:19:30.908: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
May 31 19:19:30.970: INFO: Waiting up to 5m0s for pod "pod-bc98908c-a46e-4423-800f-318ea64f410c" in namespace "emptydir-4464" to be "Succeeded or Failed"
May 31 19:19:30.976: INFO: Pod "pod-bc98908c-a46e-4423-800f-318ea64f410c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.232592ms
May 31 19:19:32.987: INFO: Pod "pod-bc98908c-a46e-4423-800f-318ea64f410c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016972023s
May 31 19:19:34.996: INFO: Pod "pod-bc98908c-a46e-4423-800f-318ea64f410c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025959062s
STEP: Saw pod success
May 31 19:19:34.998: INFO: Pod "pod-bc98908c-a46e-4423-800f-318ea64f410c" satisfied condition "Succeeded or Failed"
May 31 19:19:35.008: INFO: Trying to get logs from node kind-worker pod pod-bc98908c-a46e-4423-800f-318ea64f410c container test-container: <nil>
STEP: delete the pod
May 31 19:19:35.026: INFO: Waiting for pod pod-bc98908c-a46e-4423-800f-318ea64f410c to disappear
May 31 19:19:35.028: INFO: Pod pod-bc98908c-a46e-4423-800f-318ea64f410c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 19:19:35.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4464" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":231,"skipped":3807,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 19:19:52.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-98" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":292,"completed":232,"skipped":3850,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
May 31 19:19:57.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2869" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":292,"completed":233,"skipped":3877,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 13 lines ...
May 31 19:19:59.310: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
May 31 19:20:00.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2488" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":292,"completed":234,"skipped":3896,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-c13dadd1-0506-47be-acd4-3c7334338ebe
STEP: Creating a pod to test consume secrets
May 31 19:20:00.434: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-54d57b7c-9768-45b2-a49d-3de5819e349e" in namespace "projected-4771" to be "Succeeded or Failed"
May 31 19:20:00.436: INFO: Pod "pod-projected-secrets-54d57b7c-9768-45b2-a49d-3de5819e349e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.434596ms
May 31 19:20:02.440: INFO: Pod "pod-projected-secrets-54d57b7c-9768-45b2-a49d-3de5819e349e": Phase="Running", Reason="", readiness=true. Elapsed: 2.006027928s
May 31 19:20:04.444: INFO: Pod "pod-projected-secrets-54d57b7c-9768-45b2-a49d-3de5819e349e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010631472s
STEP: Saw pod success
May 31 19:20:04.444: INFO: Pod "pod-projected-secrets-54d57b7c-9768-45b2-a49d-3de5819e349e" satisfied condition "Succeeded or Failed"
May 31 19:20:04.452: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-54d57b7c-9768-45b2-a49d-3de5819e349e container projected-secret-volume-test: <nil>
STEP: delete the pod
May 31 19:20:04.488: INFO: Waiting for pod pod-projected-secrets-54d57b7c-9768-45b2-a49d-3de5819e349e to disappear
May 31 19:20:04.492: INFO: Pod pod-projected-secrets-54d57b7c-9768-45b2-a49d-3de5819e349e no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
May 31 19:20:04.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4771" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":235,"skipped":3901,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
May 31 19:20:11.134: INFO: stderr: ""
May 31 19:20:11.134: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8623-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 19:20:14.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4423" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":292,"completed":236,"skipped":3973,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-dfebb1b5-ac51-4db2-8f4a-a04f97050b37
STEP: Creating a pod to test consume configMaps
May 31 19:20:14.802: INFO: Waiting up to 5m0s for pod "pod-configmaps-139a4217-0331-4443-9041-89c0fbe9dc06" in namespace "configmap-2213" to be "Succeeded or Failed"
May 31 19:20:14.807: INFO: Pod "pod-configmaps-139a4217-0331-4443-9041-89c0fbe9dc06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334064ms
May 31 19:20:16.814: INFO: Pod "pod-configmaps-139a4217-0331-4443-9041-89c0fbe9dc06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012092926s
May 31 19:20:18.819: INFO: Pod "pod-configmaps-139a4217-0331-4443-9041-89c0fbe9dc06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016920551s
STEP: Saw pod success
May 31 19:20:18.819: INFO: Pod "pod-configmaps-139a4217-0331-4443-9041-89c0fbe9dc06" satisfied condition "Succeeded or Failed"
May 31 19:20:18.826: INFO: Trying to get logs from node kind-worker pod pod-configmaps-139a4217-0331-4443-9041-89c0fbe9dc06 container configmap-volume-test: <nil>
STEP: delete the pod
May 31 19:20:18.849: INFO: Waiting for pod pod-configmaps-139a4217-0331-4443-9041-89c0fbe9dc06 to disappear
May 31 19:20:18.852: INFO: Pod pod-configmaps-139a4217-0331-4443-9041-89c0fbe9dc06 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
May 31 19:20:18.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2213" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":237,"skipped":3989,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-19e68b21-a668-47b1-b558-5a2c58574873
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
May 31 19:20:27.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7482" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":238,"skipped":3993,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 19:20:27.032: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 31 19:20:27.086: INFO: Waiting up to 5m0s for pod "pod-86618197-1566-4518-a1d5-c6dbe7bf1342" in namespace "emptydir-8011" to be "Succeeded or Failed"
May 31 19:20:27.088: INFO: Pod "pod-86618197-1566-4518-a1d5-c6dbe7bf1342": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208513ms
May 31 19:20:29.101: INFO: Pod "pod-86618197-1566-4518-a1d5-c6dbe7bf1342": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01480075s
May 31 19:20:31.110: INFO: Pod "pod-86618197-1566-4518-a1d5-c6dbe7bf1342": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024508851s
STEP: Saw pod success
May 31 19:20:31.110: INFO: Pod "pod-86618197-1566-4518-a1d5-c6dbe7bf1342" satisfied condition "Succeeded or Failed"
May 31 19:20:31.116: INFO: Trying to get logs from node kind-worker2 pod pod-86618197-1566-4518-a1d5-c6dbe7bf1342 container test-container: <nil>
STEP: delete the pod
May 31 19:20:31.152: INFO: Waiting for pod pod-86618197-1566-4518-a1d5-c6dbe7bf1342 to disappear
May 31 19:20:31.159: INFO: Pod pod-86618197-1566-4518-a1d5-c6dbe7bf1342 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 19:20:31.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8011" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":239,"skipped":3995,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 28 lines ...
May 31 19:20:39.519: INFO: stderr: ""
May 31 19:20:39.519: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 19:20:39.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7301" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":292,"completed":240,"skipped":4023,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 65 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 19:20:56.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9155" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":292,"completed":241,"skipped":4025,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-8721d312-44b5-4245-b289-c012f1b5ab91
STEP: Creating a pod to test consume configMaps
May 31 19:20:56.947: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-492b4e92-3e6c-4c71-bee4-cf24c56c800d" in namespace "projected-7893" to be "Succeeded or Failed"
May 31 19:20:56.954: INFO: Pod "pod-projected-configmaps-492b4e92-3e6c-4c71-bee4-cf24c56c800d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.369408ms
May 31 19:20:58.960: INFO: Pod "pod-projected-configmaps-492b4e92-3e6c-4c71-bee4-cf24c56c800d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012733136s
May 31 19:21:00.964: INFO: Pod "pod-projected-configmaps-492b4e92-3e6c-4c71-bee4-cf24c56c800d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017053303s
STEP: Saw pod success
May 31 19:21:00.964: INFO: Pod "pod-projected-configmaps-492b4e92-3e6c-4c71-bee4-cf24c56c800d" satisfied condition "Succeeded or Failed"
May 31 19:21:00.971: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-492b4e92-3e6c-4c71-bee4-cf24c56c800d container projected-configmap-volume-test: <nil>
STEP: delete the pod
May 31 19:21:01.000: INFO: Waiting for pod pod-projected-configmaps-492b4e92-3e6c-4c71-bee4-cf24c56c800d to disappear
May 31 19:21:01.003: INFO: Pod pod-projected-configmaps-492b4e92-3e6c-4c71-bee4-cf24c56c800d no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
May 31 19:21:01.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7893" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":242,"skipped":4026,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 19:21:01.011: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 31 19:21:01.063: INFO: Waiting up to 5m0s for pod "pod-75f53840-c4f0-4fbf-8c26-427bba1f7e5c" in namespace "emptydir-3293" to be "Succeeded or Failed"
May 31 19:21:01.068: INFO: Pod "pod-75f53840-c4f0-4fbf-8c26-427bba1f7e5c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.156914ms
May 31 19:21:03.075: INFO: Pod "pod-75f53840-c4f0-4fbf-8c26-427bba1f7e5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011513376s
May 31 19:21:05.082: INFO: Pod "pod-75f53840-c4f0-4fbf-8c26-427bba1f7e5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019441365s
STEP: Saw pod success
May 31 19:21:05.083: INFO: Pod "pod-75f53840-c4f0-4fbf-8c26-427bba1f7e5c" satisfied condition "Succeeded or Failed"
May 31 19:21:05.094: INFO: Trying to get logs from node kind-worker2 pod pod-75f53840-c4f0-4fbf-8c26-427bba1f7e5c container test-container: <nil>
STEP: delete the pod
May 31 19:21:05.116: INFO: Waiting for pod pod-75f53840-c4f0-4fbf-8c26-427bba1f7e5c to disappear
May 31 19:21:05.124: INFO: Pod pod-75f53840-c4f0-4fbf-8c26-427bba1f7e5c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 19:21:05.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3293" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":243,"skipped":4027,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Events
... skipping 16 lines ...
May 31 19:21:13.235: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  test/e2e/framework/framework.go:175
May 31 19:21:13.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4474" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":292,"completed":244,"skipped":4030,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 12 lines ...
May 31 19:21:17.367: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 31 19:21:17.551: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 19:21:17.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3775" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":292,"completed":245,"skipped":4040,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should have an terminated reason [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
May 31 19:21:21.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2308" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":292,"completed":246,"skipped":4050,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 19:21:21.623: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
May 31 19:21:21.679: INFO: Waiting up to 5m0s for pod "pod-d82f26c6-977f-4be8-86a5-9c87a61d28a4" in namespace "emptydir-4164" to be "Succeeded or Failed"
May 31 19:21:21.683: INFO: Pod "pod-d82f26c6-977f-4be8-86a5-9c87a61d28a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062437ms
May 31 19:21:23.691: INFO: Pod "pod-d82f26c6-977f-4be8-86a5-9c87a61d28a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01177421s
May 31 19:21:25.699: INFO: Pod "pod-d82f26c6-977f-4be8-86a5-9c87a61d28a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020494029s
STEP: Saw pod success
May 31 19:21:25.699: INFO: Pod "pod-d82f26c6-977f-4be8-86a5-9c87a61d28a4" satisfied condition "Succeeded or Failed"
May 31 19:21:25.707: INFO: Trying to get logs from node kind-worker2 pod pod-d82f26c6-977f-4be8-86a5-9c87a61d28a4 container test-container: <nil>
STEP: delete the pod
May 31 19:21:25.734: INFO: Waiting for pod pod-d82f26c6-977f-4be8-86a5-9c87a61d28a4 to disappear
May 31 19:21:25.737: INFO: Pod pod-d82f26c6-977f-4be8-86a5-9c87a61d28a4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 19:21:25.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4164" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":247,"skipped":4070,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
May 31 19:21:32.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7326" for this suite.
STEP: Destroying namespace "webhook-7326-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":292,"completed":248,"skipped":4090,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 60 lines ...
May 31 19:21:41.958: INFO: stderr: ""
May 31 19:21:41.958: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 19:21:41.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-745" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":292,"completed":249,"skipped":4092,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
May 31 19:21:42.042: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://127.0.0.1:42753 --kubeconfig=/root/.kube/kind-test-config proxy --unix-socket=/tmp/kubectl-proxy-unix128652020/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 19:21:42.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5458" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":292,"completed":250,"skipped":4129,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 14 lines ...
May 31 19:21:47.391: INFO: Trying to dial the pod
May 31 19:21:52.411: INFO: Controller my-hostname-basic-5593cb49-7b3b-439d-8abf-ccc8909e29da: Got expected result from replica 1 [my-hostname-basic-5593cb49-7b3b-439d-8abf-ccc8909e29da-gbltf]: "my-hostname-basic-5593cb49-7b3b-439d-8abf-ccc8909e29da-gbltf", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
May 31 19:21:52.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2907" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":292,"completed":251,"skipped":4140,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a volume subpath [sig-storage] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
May 31 19:21:52.423: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in volume subpath
May 31 19:21:52.470: INFO: Waiting up to 5m0s for pod "var-expansion-7a7e4b4f-0539-478b-b85d-185644d63de4" in namespace "var-expansion-9345" to be "Succeeded or Failed"
May 31 19:21:52.473: INFO: Pod "var-expansion-7a7e4b4f-0539-478b-b85d-185644d63de4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.363737ms
May 31 19:21:54.478: INFO: Pod "var-expansion-7a7e4b4f-0539-478b-b85d-185644d63de4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007782655s
May 31 19:21:56.492: INFO: Pod "var-expansion-7a7e4b4f-0539-478b-b85d-185644d63de4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022187398s
STEP: Saw pod success
May 31 19:21:56.494: INFO: Pod "var-expansion-7a7e4b4f-0539-478b-b85d-185644d63de4" satisfied condition "Succeeded or Failed"
May 31 19:21:56.499: INFO: Trying to get logs from node kind-worker pod var-expansion-7a7e4b4f-0539-478b-b85d-185644d63de4 container dapi-container: <nil>
STEP: delete the pod
May 31 19:21:56.543: INFO: Waiting for pod var-expansion-7a7e4b4f-0539-478b-b85d-185644d63de4 to disappear
May 31 19:21:56.548: INFO: Pod var-expansion-7a7e4b4f-0539-478b-b85d-185644d63de4 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
May 31 19:21:56.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9345" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":292,"completed":252,"skipped":4144,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
May 31 19:21:56.615: INFO: Waiting up to 5m0s for pod "busybox-user-65534-90cf68ce-3bc0-469d-9bc6-5129cd537ea5" in namespace "security-context-test-7434" to be "Succeeded or Failed"
May 31 19:21:56.624: INFO: Pod "busybox-user-65534-90cf68ce-3bc0-469d-9bc6-5129cd537ea5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.525426ms
May 31 19:21:58.636: INFO: Pod "busybox-user-65534-90cf68ce-3bc0-469d-9bc6-5129cd537ea5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021329478s
May 31 19:22:00.642: INFO: Pod "busybox-user-65534-90cf68ce-3bc0-469d-9bc6-5129cd537ea5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027542192s
May 31 19:22:00.642: INFO: Pod "busybox-user-65534-90cf68ce-3bc0-469d-9bc6-5129cd537ea5" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
May 31 19:22:00.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7434" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":253,"skipped":4167,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-f7e1a201-7ad4-4fa9-96e6-3b15ba015acf
STEP: Creating a pod to test consume configMaps
May 31 19:22:00.707: INFO: Waiting up to 5m0s for pod "pod-configmaps-eb48f885-6ee2-45f6-8f0a-eba1690c5f6e" in namespace "configmap-2754" to be "Succeeded or Failed"
May 31 19:22:00.710: INFO: Pod "pod-configmaps-eb48f885-6ee2-45f6-8f0a-eba1690c5f6e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.260268ms
May 31 19:22:02.715: INFO: Pod "pod-configmaps-eb48f885-6ee2-45f6-8f0a-eba1690c5f6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007640578s
May 31 19:22:04.731: INFO: Pod "pod-configmaps-eb48f885-6ee2-45f6-8f0a-eba1690c5f6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023950817s
STEP: Saw pod success
May 31 19:22:04.731: INFO: Pod "pod-configmaps-eb48f885-6ee2-45f6-8f0a-eba1690c5f6e" satisfied condition "Succeeded or Failed"
May 31 19:22:04.737: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-eb48f885-6ee2-45f6-8f0a-eba1690c5f6e container configmap-volume-test: <nil>
STEP: delete the pod
May 31 19:22:04.803: INFO: Waiting for pod pod-configmaps-eb48f885-6ee2-45f6-8f0a-eba1690c5f6e to disappear
May 31 19:22:04.806: INFO: Pod pod-configmaps-eb48f885-6ee2-45f6-8f0a-eba1690c5f6e no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
May 31 19:22:04.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2754" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":254,"skipped":4175,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
S
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
May 31 19:22:05.114: INFO: stderr: ""
May 31 19:22:05.115: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 19:22:05.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-371" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":292,"completed":255,"skipped":4176,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should run through the lifecycle of a ServiceAccount [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 10 lines ...
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
May 31 19:22:05.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3603" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":292,"completed":256,"skipped":4188,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 24 lines ...
May 31 19:22:05.932: INFO: created pod pod-service-account-nomountsa-nomountspec
May 31 19:22:05.932: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
May 31 19:22:05.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7666" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":292,"completed":257,"skipped":4208,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
May 31 19:22:12.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-861" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":292,"completed":258,"skipped":4251,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 31 19:22:12.172: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-cfe9dd26-fb30-43f1-9311-52f958efb42f
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
May 31 19:22:12.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9773" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":292,"completed":259,"skipped":4281,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 9 lines ...
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/framework.go:175
May 31 19:22:12.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3680" for this suite.
STEP: Destroying namespace "nspatchtest-c2f3fd9a-2560-43ef-8eb0-918b781c47e5-4175" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":292,"completed":260,"skipped":4302,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-e7c47a1f-eb62-40eb-955d-079c52281801
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
May 31 19:22:20.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5978" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":261,"skipped":4322,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-f5cc2cb7-083a-4067-8030-2bd615afce5a
STEP: Creating a pod to test consume configMaps
May 31 19:22:20.792: INFO: Waiting up to 5m0s for pod "pod-configmaps-ef46c5e3-c571-46a2-bd9f-1d2f4eca2745" in namespace "configmap-7288" to be "Succeeded or Failed"
May 31 19:22:20.795: INFO: Pod "pod-configmaps-ef46c5e3-c571-46a2-bd9f-1d2f4eca2745": Phase="Pending", Reason="", readiness=false. Elapsed: 2.755197ms
May 31 19:22:22.810: INFO: Pod "pod-configmaps-ef46c5e3-c571-46a2-bd9f-1d2f4eca2745": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018139156s
May 31 19:22:24.818: INFO: Pod "pod-configmaps-ef46c5e3-c571-46a2-bd9f-1d2f4eca2745": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026207483s
STEP: Saw pod success
May 31 19:22:24.818: INFO: Pod "pod-configmaps-ef46c5e3-c571-46a2-bd9f-1d2f4eca2745" satisfied condition "Succeeded or Failed"
May 31 19:22:24.823: INFO: Trying to get logs from node kind-worker pod pod-configmaps-ef46c5e3-c571-46a2-bd9f-1d2f4eca2745 container configmap-volume-test: <nil>
STEP: delete the pod
May 31 19:22:24.847: INFO: Waiting for pod pod-configmaps-ef46c5e3-c571-46a2-bd9f-1d2f4eca2745 to disappear
May 31 19:22:24.851: INFO: Pod pod-configmaps-ef46c5e3-c571-46a2-bd9f-1d2f4eca2745 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
May 31 19:22:24.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7288" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":292,"completed":262,"skipped":4353,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
S
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath 
  runs ReplicaSets to verify preemption running path [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 32 lines ...
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/framework.go:175
May 31 19:24:08.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-7954" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:75
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":292,"completed":263,"skipped":4354,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 31 19:24:08.276: INFO: Waiting up to 5m0s for pod "downwardapi-volume-207f564d-703b-40c9-8daa-e3fe470169bf" in namespace "projected-468" to be "Succeeded or Failed"
May 31 19:24:08.281: INFO: Pod "downwardapi-volume-207f564d-703b-40c9-8daa-e3fe470169bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.36975ms
May 31 19:24:10.292: INFO: Pod "downwardapi-volume-207f564d-703b-40c9-8daa-e3fe470169bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015895158s
May 31 19:24:12.304: INFO: Pod "downwardapi-volume-207f564d-703b-40c9-8daa-e3fe470169bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027841218s
STEP: Saw pod success
May 31 19:24:12.304: INFO: Pod "downwardapi-volume-207f564d-703b-40c9-8daa-e3fe470169bf" satisfied condition "Succeeded or Failed"
May 31 19:24:12.311: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-207f564d-703b-40c9-8daa-e3fe470169bf container client-container: <nil>
STEP: delete the pod
May 31 19:24:12.363: INFO: Waiting for pod downwardapi-volume-207f564d-703b-40c9-8daa-e3fe470169bf to disappear
May 31 19:24:12.372: INFO: Pod downwardapi-volume-207f564d-703b-40c9-8daa-e3fe470169bf no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
May 31 19:24:12.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-468" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":264,"skipped":4375,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}

------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
May 31 19:24:12.438: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 19:24:12.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7541" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":292,"completed":265,"skipped":4375,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 19:24:13.003: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 31 19:24:13.051: INFO: Waiting up to 5m0s for pod "pod-b3525446-1bff-436d-a2ac-96a9a088801e" in namespace "emptydir-512" to be "Succeeded or Failed"
May 31 19:24:13.055: INFO: Pod "pod-b3525446-1bff-436d-a2ac-96a9a088801e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145003ms
May 31 19:24:15.062: INFO: Pod "pod-b3525446-1bff-436d-a2ac-96a9a088801e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010728132s
May 31 19:24:17.066: INFO: Pod "pod-b3525446-1bff-436d-a2ac-96a9a088801e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014992238s
STEP: Saw pod success
May 31 19:24:17.066: INFO: Pod "pod-b3525446-1bff-436d-a2ac-96a9a088801e" satisfied condition "Succeeded or Failed"
May 31 19:24:17.072: INFO: Trying to get logs from node kind-worker pod pod-b3525446-1bff-436d-a2ac-96a9a088801e container test-container: <nil>
STEP: delete the pod
May 31 19:24:17.094: INFO: Waiting for pod pod-b3525446-1bff-436d-a2ac-96a9a088801e to disappear
May 31 19:24:17.099: INFO: Pod pod-b3525446-1bff-436d-a2ac-96a9a088801e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 19:24:17.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-512" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":266,"skipped":4390,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}

------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 13 lines ...
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
May 31 19:24:34.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7981" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":292,"completed":267,"skipped":4390,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
May 31 19:24:34.231: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on tmpfs
May 31 19:24:34.282: INFO: Waiting up to 5m0s for pod "pod-415934cd-97b8-4fad-b438-87503e84141a" in namespace "emptydir-968" to be "Succeeded or Failed"
May 31 19:24:34.284: INFO: Pod "pod-415934cd-97b8-4fad-b438-87503e84141a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.695085ms
May 31 19:24:36.296: INFO: Pod "pod-415934cd-97b8-4fad-b438-87503e84141a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013709692s
May 31 19:24:38.300: INFO: Pod "pod-415934cd-97b8-4fad-b438-87503e84141a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018597938s
STEP: Saw pod success
May 31 19:24:38.300: INFO: Pod "pod-415934cd-97b8-4fad-b438-87503e84141a" satisfied condition "Succeeded or Failed"
May 31 19:24:38.307: INFO: Trying to get logs from node kind-worker2 pod pod-415934cd-97b8-4fad-b438-87503e84141a container test-container: <nil>
STEP: delete the pod
May 31 19:24:38.344: INFO: Waiting for pod pod-415934cd-97b8-4fad-b438-87503e84141a to disappear
May 31 19:24:38.352: INFO: Pod pod-415934cd-97b8-4fad-b438-87503e84141a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
May 31 19:24:38.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-968" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":268,"skipped":4398,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-3945/configmap-test-ea522fcb-aaa6-455b-99c9-11302f596c1b
STEP: Creating a pod to test consume configMaps
May 31 19:24:38.423: INFO: Waiting up to 5m0s for pod "pod-configmaps-f4e67407-ade5-43d8-a59d-f1058bfe56f8" in namespace "configmap-3945" to be "Succeeded or Failed"
May 31 19:24:38.427: INFO: Pod "pod-configmaps-f4e67407-ade5-43d8-a59d-f1058bfe56f8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.545629ms
May 31 19:24:40.436: INFO: Pod "pod-configmaps-f4e67407-ade5-43d8-a59d-f1058bfe56f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012612124s
May 31 19:24:42.442: INFO: Pod "pod-configmaps-f4e67407-ade5-43d8-a59d-f1058bfe56f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018911929s
STEP: Saw pod success
May 31 19:24:42.442: INFO: Pod "pod-configmaps-f4e67407-ade5-43d8-a59d-f1058bfe56f8" satisfied condition "Succeeded or Failed"
May 31 19:24:42.448: INFO: Trying to get logs from node kind-worker pod pod-configmaps-f4e67407-ade5-43d8-a59d-f1058bfe56f8 container env-test: <nil>
STEP: delete the pod
May 31 19:24:42.483: INFO: Waiting for pod pod-configmaps-f4e67407-ade5-43d8-a59d-f1058bfe56f8 to disappear
May 31 19:24:42.488: INFO: Pod pod-configmaps-f4e67407-ade5-43d8-a59d-f1058bfe56f8 no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
May 31 19:24:42.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3945" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":292,"completed":269,"skipped":4403,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 14 lines ...
STEP: verifying the updated pod is in kubernetes
May 31 19:24:47.134: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
May 31 19:24:47.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4136" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":292,"completed":270,"skipped":4436,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 67 lines ...
May 31 19:25:06.835: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4526/pods","resourceVersion":"33539"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
May 31 19:25:06.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4526" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":292,"completed":271,"skipped":4458,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-56998539-77fc-4146-8ce5-de48fa4f1f9b
STEP: Creating a pod to test consume secrets
May 31 19:25:06.927: INFO: Waiting up to 5m0s for pod "pod-secrets-aaeafe89-f40d-4906-93d1-4c625052bdd1" in namespace "secrets-6585" to be "Succeeded or Failed"
May 31 19:25:06.931: INFO: Pod "pod-secrets-aaeafe89-f40d-4906-93d1-4c625052bdd1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.460466ms
May 31 19:25:08.935: INFO: Pod "pod-secrets-aaeafe89-f40d-4906-93d1-4c625052bdd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008108632s
May 31 19:25:10.947: INFO: Pod "pod-secrets-aaeafe89-f40d-4906-93d1-4c625052bdd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020154871s
STEP: Saw pod success
May 31 19:25:10.948: INFO: Pod "pod-secrets-aaeafe89-f40d-4906-93d1-4c625052bdd1" satisfied condition "Succeeded or Failed"
May 31 19:25:10.956: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-aaeafe89-f40d-4906-93d1-4c625052bdd1 container secret-volume-test: <nil>
STEP: delete the pod
May 31 19:25:10.996: INFO: Waiting for pod pod-secrets-aaeafe89-f40d-4906-93d1-4c625052bdd1 to disappear
May 31 19:25:11.002: INFO: Pod pod-secrets-aaeafe89-f40d-4906-93d1-4c625052bdd1 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
May 31 19:25:11.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6585" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":292,"completed":272,"skipped":4461,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 190 lines ...
May 31 19:25:23.567: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 31 19:25:23.567: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
May 31 19:25:23.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6796" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":292,"completed":273,"skipped":4467,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
May 31 19:25:23.668: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
May 31 19:25:25.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9435" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":292,"completed":274,"skipped":4479,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 100 lines ...
May 31 19:26:50.425: INFO: Waiting for statefulset status.replicas updated to 0
May 31 19:26:50.431: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
May 31 19:26:50.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7922" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":292,"completed":275,"skipped":4487,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 39 lines ...
May 31 19:28:10.877: INFO: Waiting for statefulset status.replicas updated to 0
May 31 19:28:10.884: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
May 31 19:28:10.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2900" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":292,"completed":276,"skipped":4520,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 71 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
May 31 19:29:06.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9597" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":292,"completed":277,"skipped":4534,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}

------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Aggregator
... skipping 21 lines ...
[AfterEach] [sig-api-machinery] Aggregator
  test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  test/e2e/framework/framework.go:175
May 31 19:29:31.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4306" for this suite.
•{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":292,"completed":278,"skipped":4534,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 8 lines ...
STEP: Creating secret with name s-test-opt-upd-f41bb733-3738-4ef2-9974-ba2c73f94a18
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-32ad8fb7-c2ea-4e9c-b227-63fa1ba76d42
STEP: Updating secret s-test-opt-upd-f41bb733-3738-4ef2-9974-ba2c73f94a18
STEP: Creating secret with name s-test-opt-create-caeef521-61ee-4c15-be7d-b7e2230e2bdc
STEP: waiting to observe update in volume
{"component":"entrypoint","file":"prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","time":"2020-05-31T19:30:08Z"}
{"component":"entrypoint","file":"prow/entrypoint/run.go:245","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","time":"2020-05-31T19:30:23Z"}