Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-06-01 01:17
Elapsed: 2h0m
Revision: master
resultstore: https://source.cloud.google.com/results/invocations/ca6d55c6-a301-4545-bced-f553350398bf/targets/test

No Test Failures!


Error lines from build-log.txt

... skipping 68 lines ...
Analyzing: 4 targets (20 packages loaded, 27 targets configured)
Analyzing: 4 targets (446 packages loaded, 7765 targets configured)
Analyzing: 4 targets (1231 packages loaded, 10479 targets configured)
Analyzing: 4 targets (2268 packages loaded, 15447 targets configured)
Analyzing: 4 targets (2268 packages loaded, 15447 targets configured)
Analyzing: 4 targets (2269 packages loaded, 15447 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/bazel_gazelle/internal/go_repository.bzl:184:13: org_golang_x_tools: gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go:2:43: expected 'package', found 'EOF'
gazelle: found packages conversions (conversions.go) and server (issue29198.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gccgoimporter/testdata
gazelle: found packages p (p.go) and issue25301 (issue25301.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gcimporter/testdata
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go:1:34: expected 'package', found 'EOF'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go:1:16: expected ';', found '.'
gazelle: finding module path for import domain.name/importdecl: exit status 1: can't load package: package domain.name/importdecl: cannot find module providing package domain.name/importdecl
gazelle: finding module path for import old.com/one: exit status 1: can't load package: package old.com/one: cannot find module providing package old.com/one
gazelle: finding module path for import titanic.biz/bar: exit status 1: can't load package: package titanic.biz/bar: cannot find module providing package titanic.biz/bar
gazelle: finding module path for import titanic.biz/foo: exit status 1: can't load package: package titanic.biz/foo: cannot find module providing package titanic.biz/foo
gazelle: finding module path for import fruit.io/pear: exit status 1: can't load package: package fruit.io/pear: cannot find module providing package fruit.io/pear
gazelle: finding module path for import fruit.io/banana: exit status 1: can't load package: package fruit.io/banana: cannot find module providing package fruit.io/banana
... skipping 156 lines ...
WARNING: Waiting for server process to terminate (waited 10 seconds, waiting at most 60)
WARNING: Waiting for server process to terminate (waited 30 seconds, waiting at most 60)
INFO: Waited 60 seconds for server process (pid=5872) to terminate.
WARNING: Waiting for server process to terminate (waited 5 seconds, waiting at most 10)
WARNING: Waiting for server process to terminate (waited 10 seconds, waiting at most 10)
INFO: Waited 10 seconds for server process (pid=5872) to terminate.
FATAL: Attempted to kill stale server process (pid=5872) using SIGKILL, but it did not die in a timely fashion.
+ true
+ pkill ^bazel
+ true
+ mkdir -p _output/bin/
+ cp bazel-bin/test/e2e/e2e.test _output/bin/
+ find /home/prow/go/src/k8s.io/kubernetes/bazel-bin/ -name kubectl -type f
... skipping 46 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.3
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.3
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.18.0.3
... skipping 4 lines ...
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.3
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 32 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.2
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.2
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: kind-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.2
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 32 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.4
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.4
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: kind-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 37 lines ...
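The kubeadm configuration kind generates above is one file holding several YAML documents separated by "---" (InitConfiguration, JoinConfiguration, KubeletConfiguration, one set per node). A minimal Go sketch of walking such a multi-document file, assuming gopkg.in/yaml.v3 and a hypothetical file name kubeadm.conf:

	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Hypothetical path; kind writes the real file inside each node container.
		f, err := os.Open("kubeadm.conf")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		// yaml.v3's Decoder iterates "---"-separated documents until io.EOF.
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("document kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
		}
	}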
I0601 01:24:54.115587     219 checks.go:376] validating the presence of executable ebtables
I0601 01:24:54.115992     219 checks.go:376] validating the presence of executable ethtool
I0601 01:24:54.116065     219 checks.go:376] validating the presence of executable socat
I0601 01:24:54.116177     219 checks.go:376] validating the presence of executable tc
I0601 01:24:54.116223     219 checks.go:376] validating the presence of executable touch
I0601 01:24:54.116348     219 checks.go:520] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_PIDS: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0601 01:24:54.130525     219 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0601 01:24:54.151785     219 checks.go:618] validating kubelet version
I0601 01:24:54.344848     219 checks.go:128] validating if the "kubelet" service is enabled and active
I0601 01:24:54.371041     219 checks.go:201] validating availability of port 10250
I0601 01:24:54.371148     219 checks.go:201] validating availability of port 2379
[preflight] Pulling images required for setting up a Kubernetes cluster
... skipping 86 lines ...
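The preflight phase above validates that a set of host executables exists before proceeding. A minimal sketch of the same style of check using os/exec, limited to the binaries named in this log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same executables kubeadm's checks.go validates in the log above.
		for _, bin := range []string{"ebtables", "ethtool", "socat", "tc", "touch"} {
			if path, err := exec.LookPath(bin); err != nil {
				fmt.Printf("missing executable %q: %v\n", bin, err)
			} else {
				fmt.Printf("found %s at %s\n", bin, path)
			}
		}
	}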
I0601 01:25:08.108098     219 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 12 milliseconds
I0601 01:25:08.608951     219 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 13 milliseconds
I0601 01:25:09.111759     219 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 13 milliseconds
I0601 01:25:09.609638     219 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 12 milliseconds
I0601 01:25:10.109228     219 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 13 milliseconds
I0601 01:25:10.620135     219 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 24 milliseconds
I0601 01:25:19.786045     219 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 8690 milliseconds
I0601 01:25:20.097585     219 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 2 milliseconds
I0601 01:25:20.597386     219 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds
I0601 01:25:21.097087     219 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds
I0601 01:25:21.597829     219 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 200 OK in 2 milliseconds
I0601 01:25:21.597948     219 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[apiclient] All control plane components are healthy after 21.032314 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0601 01:25:21.604359     219 round_trippers.go:443] POST https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 4 milliseconds
I0601 01:25:21.608901     219 round_trippers.go:443] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 3 milliseconds
... skipping 109 lines ...
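The round_trippers lines above show kubeadm polling the API server's /healthz roughly every 500ms, getting 500s while control plane components start, then a 200 after about 21 seconds. A minimal standalone probe with the same shape, assuming the endpoint from the log and skipping TLS verification because the kind CA is not in the host trust store:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 10 * time.Second,
			Transport: &http.Transport{
				// Diagnostic probe only: the cluster CA is not installed locally.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		// 4-minute deadline is an assumption; kubeadm's own timeout differs.
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://kind-control-plane:6443/healthz?timeout=10s")
			if err == nil {
				resp.Body.Close()
				fmt.Printf("GET /healthz -> %s\n", resp.Status)
				if resp.StatusCode == http.StatusOK {
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("API server never became healthy")
	}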
I0601 01:25:31.149311     573 checks.go:376] validating the presence of executable ebtables
I0601 01:25:31.149346     573 checks.go:376] validating the presence of executable ethtool
I0601 01:25:31.149422     573 checks.go:376] validating the presence of executable socat
I0601 01:25:31.149448     573 checks.go:376] validating the presence of executable tc
I0601 01:25:31.149466     573 checks.go:376] validating the presence of executable touch
I0601 01:25:31.149500     573 checks.go:520] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0601 01:25:31.157938     573 checks.go:406] checking whether the given node name is reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
... skipping 81 lines ...
I0601 01:25:31.125554     564 checks.go:376] validating the presence of executable ebtables
I0601 01:25:31.125583     564 checks.go:376] validating the presence of executable ethtool
I0601 01:25:31.125627     564 checks.go:376] validating the presence of executable socat
I0601 01:25:31.125677     564 checks.go:376] validating the presence of executable tc
I0601 01:25:31.125710     564 checks.go:376] validating the presence of executable touch
I0601 01:25:31.125819     564 checks.go:520] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_PIDS: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0601 01:25:31.140575     564 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0601 01:25:31.157937     564 checks.go:618] validating kubelet version
I0601 01:25:31.351471     564 checks.go:128] validating if the "kubelet" service is enabled and active
I0601 01:25:31.385146     564 checks.go:201] validating availability of port 10250
I0601 01:25:31.385459     564 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
I0601 01:25:31.385495     564 checks.go:432] validating if the connectivity type is via proxy or direct
... skipping 74 lines ...
+ GINKGO_PID=11808
+ wait 11808
+ ./hack/ginkgo-e2e.sh --provider=skeleton --num-nodes=2 --ginkgo.focus=\[Conformance\] --ginkgo.skip= --report-dir=/logs/artifacts --disable-log-dump=true
Conformance test: not doing test setup.
I0601 01:26:13.083221   12159 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0601 01:26:13.083358   12159 e2e.go:129] Starting e2e run "1e630076-054d-4653-ab34-c4ab3075c580" on Ginkgo node 1
{"msg":"Test Suite starting","total":292,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1590974771 - Will randomize all specs
Will run 292 of 5101 specs

Jun  1 01:26:13.107: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 01:26:13.114: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun  1 01:26:13.134: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun  1 01:26:13.178: INFO: The status of Pod coredns-66bff467f8-t5tnn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun  1 01:26:13.178: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun  1 01:26:13.178: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Jun  1 01:26:13.178: INFO: POD                       NODE         PHASE    GRACE  CONDITIONS
Jun  1 01:26:13.178: INFO: coredns-66bff467f8-t5tnn  kind-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 01:26:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 01:26:09 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 01:26:09 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 01:26:09 +0000 UTC  }]
Jun  1 01:26:13.178: INFO: 
Jun  1 01:26:15.193: INFO: The status of Pod coredns-66bff467f8-t5tnn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun  1 01:26:15.193: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
Jun  1 01:26:15.193: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Jun  1 01:26:15.193: INFO: POD                       NODE         PHASE    GRACE  CONDITIONS
Jun  1 01:26:15.193: INFO: coredns-66bff467f8-t5tnn  kind-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 01:26:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 01:26:09 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 01:26:09 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 01:26:09 +0000 UTC  }]
Jun  1 01:26:15.193: INFO: 
Jun  1 01:26:17.195: INFO: The status of Pod coredns-66bff467f8-t5tnn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun  1 01:26:17.195: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
Jun  1 01:26:17.195: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Jun  1 01:26:17.195: INFO: POD                       NODE         PHASE    GRACE  CONDITIONS
Jun  1 01:26:17.195: INFO: coredns-66bff467f8-t5tnn  kind-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 01:26:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 01:26:09 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 01:26:09 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 01:26:09 +0000 UTC  }]
Jun  1 01:26:17.195: INFO: 
Jun  1 01:26:19.191: INFO: The status of Pod coredns-66bff467f8-t5tnn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun  1 01:26:19.191: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (6 seconds elapsed)
Jun  1 01:26:19.191: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Jun  1 01:26:19.191: INFO: POD                       NODE         PHASE    GRACE  CONDITIONS
Jun  1 01:26:19.191: INFO: coredns-66bff467f8-t5tnn  kind-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 01:26:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 01:26:09 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 01:26:09 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 01:26:09 +0000 UTC  }]
Jun  1 01:26:19.191: INFO: 
Jun  1 01:26:21.193: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (8 seconds elapsed)
... skipping 18 lines ...
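The framework above re-checks kube-system every 2 seconds until all 12 pods are Running with Ready=true. A minimal sketch of the same wait with client-go, assuming the kubeconfig path from the log and a 10-minute timeout:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from the log above.
		config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/kind-test-config")
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			log.Fatal(err)
		}
		// PollImmediate is deprecated in newer apimachinery but matches the
		// poll-every-2s behavior seen above.
		err = wait.PollImmediate(2*time.Second, 10*time.Minute, func() (bool, error) {
			pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
			if err != nil {
				return false, err
			}
			ready := 0
			for _, p := range pods.Items {
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						ready++
						break
					}
				}
			}
			fmt.Printf("%d / %d pods ready\n", ready, len(pods.Items))
			return ready == len(pods.Items), nil
		})
		if err != nil {
			log.Fatal(err)
		}
	}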
Jun  1 01:26:21.231: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-2466/configmap-test-df4f872f-4517-4208-b1cd-d5529de0ba2d
STEP: Creating a pod to test consume configMaps
Jun  1 01:26:21.243: INFO: Waiting up to 5m0s for pod "pod-configmaps-4ed8b30b-fcce-46a1-a1fc-a029417c14f0" in namespace "configmap-2466" to be "Succeeded or Failed"
Jun  1 01:26:21.245: INFO: Pod "pod-configmaps-4ed8b30b-fcce-46a1-a1fc-a029417c14f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218519ms
Jun  1 01:26:23.248: INFO: Pod "pod-configmaps-4ed8b30b-fcce-46a1-a1fc-a029417c14f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005570458s
Jun  1 01:26:25.252: INFO: Pod "pod-configmaps-4ed8b30b-fcce-46a1-a1fc-a029417c14f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009214781s
STEP: Saw pod success
Jun  1 01:26:25.252: INFO: Pod "pod-configmaps-4ed8b30b-fcce-46a1-a1fc-a029417c14f0" satisfied condition "Succeeded or Failed"
Jun  1 01:26:25.255: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-4ed8b30b-fcce-46a1-a1fc-a029417c14f0 container env-test: <nil>
STEP: delete the pod
Jun  1 01:26:25.277: INFO: Waiting for pod pod-configmaps-4ed8b30b-fcce-46a1-a1fc-a029417c14f0 to disappear
Jun  1 01:26:25.279: INFO: Pod pod-configmaps-4ed8b30b-fcce-46a1-a1fc-a029417c14f0 no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 01:26:25.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2466" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":292,"completed":1,"skipped":8,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
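The test that just passed creates a pod whose env var is sourced from a ConfigMap key and waits for it to reach Succeeded. A minimal sketch of that pod shape with corev1 types; the ConfigMap name and key are illustrative, not the generated ones from this run:

	package main

	import (
		"encoding/json"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example", Namespace: "configmap-2466"},
			Spec: corev1.PodSpec{
				RestartPolicy: corev1.RestartPolicyNever,
				Containers: []corev1.Container{{
					Name:    "env-test",
					Image:   "busybox",
					Command: []string{"sh", "-c", "env"},
					Env: []corev1.EnvVar{{
						Name: "CONFIG_DATA_1",
						ValueFrom: &corev1.EnvVarSource{
							// Illustrative ConfigMap name/key.
							ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
								LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
								Key:                  "data-1",
							},
						},
					}},
				}},
			},
		}
		out, _ := json.MarshalIndent(pod, "", "  ")
		fmt.Println(string(out))
	}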
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 17 lines ...
Jun  1 01:26:45.350: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun  1 01:26:45.354: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Jun  1 01:26:45.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7196" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":292,"completed":2,"skipped":32,"failed":0}
SS
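The lifecycle-hook test above uses a container with a postStart exec hook. A minimal sketch of that shape; the image and command are illustrative, and recent k8s.io/api calls the handler type LifecycleHandler (older releases named it Handler):

	package main

	import (
		"encoding/json"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:  "poststart",
					Image: "busybox",
					Lifecycle: &corev1.Lifecycle{
						// Runs inside the container right after it starts.
						PostStart: &corev1.LifecycleHandler{
							Exec: &corev1.ExecAction{
								Command: []string{"sh", "-c", "echo poststart ran > /tmp/hook"},
							},
						},
					},
				}},
			},
		}
		out, _ := json.MarshalIndent(pod, "", "  ")
		fmt.Println(string(out))
	}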
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  test/e2e/framework/framework.go:175
Jun  1 01:26:51.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5083" for this suite.
STEP: Destroying namespace "webhook-5083-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":292,"completed":3,"skipped":34,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-4a244277-bb1d-4e4e-99ad-6bed115b5a93
STEP: Creating a pod to test consume configMaps
Jun  1 01:26:51.736: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bfae66e5-e6a4-4a26-a013-983bcd37e5cb" in namespace "projected-3442" to be "Succeeded or Failed"
Jun  1 01:26:51.739: INFO: Pod "pod-projected-configmaps-bfae66e5-e6a4-4a26-a013-983bcd37e5cb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.232465ms
Jun  1 01:26:53.745: INFO: Pod "pod-projected-configmaps-bfae66e5-e6a4-4a26-a013-983bcd37e5cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009221503s
STEP: Saw pod success
Jun  1 01:26:53.745: INFO: Pod "pod-projected-configmaps-bfae66e5-e6a4-4a26-a013-983bcd37e5cb" satisfied condition "Succeeded or Failed"
Jun  1 01:26:53.749: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-bfae66e5-e6a4-4a26-a013-983bcd37e5cb container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 01:26:53.769: INFO: Waiting for pod pod-projected-configmaps-bfae66e5-e6a4-4a26-a013-983bcd37e5cb to disappear
Jun  1 01:26:53.772: INFO: Pod pod-projected-configmaps-bfae66e5-e6a4-4a26-a013-983bcd37e5cb no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 01:26:53.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3442" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":4,"skipped":110,"failed":0}
SSSSSSSSSSSSSSS
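The projected-ConfigMap test above mounts the ConfigMap through a projected volume with an explicit defaultMode. A minimal sketch of that volume; the mode value is an assumption:

	package main

	import (
		"encoding/json"
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	func main() {
		// File mode applied to projected files unless an item overrides it.
		mode := int32(0400)
		vol := corev1.Volume{
			Name: "projected-configmap-volume",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					DefaultMode: &mode,
					Sources: []corev1.VolumeProjection{{
						ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{
								Name: "projected-configmap-test-volume",
							},
						},
					}},
				},
			},
		}
		out, _ := json.MarshalIndent(vol, "", "  ")
		fmt.Println(string(out))
	}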
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Jun  1 01:26:53.811: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Jun  1 01:27:02.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-576" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":292,"completed":5,"skipped":125,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-28806758-759a-43f5-8743-0fee8e43e576
STEP: Creating a pod to test consume configMaps
Jun  1 01:27:02.522: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8f23b03c-8738-494d-b091-cc4a9ae5790d" in namespace "projected-2240" to be "Succeeded or Failed"
Jun  1 01:27:02.525: INFO: Pod "pod-projected-configmaps-8f23b03c-8738-494d-b091-cc4a9ae5790d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.158709ms
Jun  1 01:27:04.529: INFO: Pod "pod-projected-configmaps-8f23b03c-8738-494d-b091-cc4a9ae5790d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006684037s
Jun  1 01:27:06.532: INFO: Pod "pod-projected-configmaps-8f23b03c-8738-494d-b091-cc4a9ae5790d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009993131s
STEP: Saw pod success
Jun  1 01:27:06.532: INFO: Pod "pod-projected-configmaps-8f23b03c-8738-494d-b091-cc4a9ae5790d" satisfied condition "Succeeded or Failed"
Jun  1 01:27:06.534: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-8f23b03c-8738-494d-b091-cc4a9ae5790d container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 01:27:06.561: INFO: Waiting for pod pod-projected-configmaps-8f23b03c-8738-494d-b091-cc4a9ae5790d to disappear
Jun  1 01:27:06.563: INFO: Pod pod-projected-configmaps-8f23b03c-8738-494d-b091-cc4a9ae5790d no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 01:27:06.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2240" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":6,"skipped":131,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 21 lines ...
Jun  1 01:27:20.649: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Jun  1 01:27:20.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6562" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":292,"completed":7,"skipped":190,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 01:27:20.692: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a02be362-3390-431f-84b7-66f0708c431a" in namespace "projected-8288" to be "Succeeded or Failed"
Jun  1 01:27:20.694: INFO: Pod "downwardapi-volume-a02be362-3390-431f-84b7-66f0708c431a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.935877ms
Jun  1 01:27:22.699: INFO: Pod "downwardapi-volume-a02be362-3390-431f-84b7-66f0708c431a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006622566s
Jun  1 01:27:24.703: INFO: Pod "downwardapi-volume-a02be362-3390-431f-84b7-66f0708c431a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010928066s
STEP: Saw pod success
Jun  1 01:27:24.703: INFO: Pod "downwardapi-volume-a02be362-3390-431f-84b7-66f0708c431a" satisfied condition "Succeeded or Failed"
Jun  1 01:27:24.705: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-a02be362-3390-431f-84b7-66f0708c431a container client-container: <nil>
STEP: delete the pod
Jun  1 01:27:24.720: INFO: Waiting for pod downwardapi-volume-a02be362-3390-431f-84b7-66f0708c431a to disappear
Jun  1 01:27:24.723: INFO: Pod downwardapi-volume-a02be362-3390-431f-84b7-66f0708c431a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 01:27:24.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8288" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":8,"skipped":192,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
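The downward API test above relies on the kubelet substituting node allocatable when a container exposes limits.memory through a downward API volume without setting a limit. A minimal sketch of that volume; the file path is illustrative:

	package main

	import (
		"encoding/json"
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	func main() {
		vol := corev1.Volume{
			Name: "podinfo",
			VolumeSource: corev1.VolumeSource{
				DownwardAPI: &corev1.DownwardAPIVolumeSource{
					Items: []corev1.DownwardAPIVolumeFile{{
						// Illustrative path; resolves to node allocatable when
						// the named container sets no memory limit.
						Path: "memory_limit",
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							ContainerName: "client-container",
							Resource:      "limits.memory",
						},
					}},
				},
			},
		}
		out, _ := json.MarshalIndent(vol, "", "  ")
		fmt.Println(string(out))
	}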
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun  1 01:27:48.820: INFO: File wheezy_udp@dns-test-service-3.dns-7477.svc.cluster.local from pod  dns-7477/dns-test-96f8936c-00a1-4cf4-8318-0de2e81b082e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 01:27:48.823: INFO: File jessie_udp@dns-test-service-3.dns-7477.svc.cluster.local from pod  dns-7477/dns-test-96f8936c-00a1-4cf4-8318-0de2e81b082e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 01:27:48.823: INFO: Lookups using dns-7477/dns-test-96f8936c-00a1-4cf4-8318-0de2e81b082e failed for: [wheezy_udp@dns-test-service-3.dns-7477.svc.cluster.local jessie_udp@dns-test-service-3.dns-7477.svc.cluster.local]

Jun  1 01:27:53.827: INFO: File wheezy_udp@dns-test-service-3.dns-7477.svc.cluster.local from pod  dns-7477/dns-test-96f8936c-00a1-4cf4-8318-0de2e81b082e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 01:27:53.832: INFO: File jessie_udp@dns-test-service-3.dns-7477.svc.cluster.local from pod  dns-7477/dns-test-96f8936c-00a1-4cf4-8318-0de2e81b082e contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 01:27:53.832: INFO: Lookups using dns-7477/dns-test-96f8936c-00a1-4cf4-8318-0de2e81b082e failed for: [wheezy_udp@dns-test-service-3.dns-7477.svc.cluster.local jessie_udp@dns-test-service-3.dns-7477.svc.cluster.local]

Jun  1 01:27:58.831: INFO: DNS probes using dns-test-96f8936c-00a1-4cf4-8318-0de2e81b082e succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7477.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7477.svc.cluster.local; sleep 1; done
... skipping 9 lines ...
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 01:28:02.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7477" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":292,"completed":9,"skipped":214,"failed":0}
SSSSSSSSS
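The DNS test above creates an ExternalName service, so cluster DNS answers dns-test-service-3.dns-7477.svc.cluster.local with a CNAME that the test later flips from foo.example.com to bar.example.com (and finally to ClusterIP). A minimal sketch of that service object:

	package main

	import (
		"encoding/json"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		svc := &corev1.Service{
			ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3", Namespace: "dns-7477"},
			Spec: corev1.ServiceSpec{
				// No ClusterIP is allocated; DNS returns a CNAME instead.
				Type:         corev1.ServiceTypeExternalName,
				ExternalName: "foo.example.com",
			},
		}
		out, _ := json.MarshalIndent(svc, "", "  ")
		fmt.Println(string(out))
	}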
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Jun  1 01:28:02.951: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jun  1 01:28:03.007: INFO: Waiting up to 5m0s for pod "downward-api-bdda7be2-efc6-4a6b-85d8-0a9e7c3703de" in namespace "downward-api-211" to be "Succeeded or Failed"
Jun  1 01:28:03.015: INFO: Pod "downward-api-bdda7be2-efc6-4a6b-85d8-0a9e7c3703de": Phase="Pending", Reason="", readiness=false. Elapsed: 8.104366ms
Jun  1 01:28:05.019: INFO: Pod "downward-api-bdda7be2-efc6-4a6b-85d8-0a9e7c3703de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012231299s
STEP: Saw pod success
Jun  1 01:28:05.019: INFO: Pod "downward-api-bdda7be2-efc6-4a6b-85d8-0a9e7c3703de" satisfied condition "Succeeded or Failed"
Jun  1 01:28:05.021: INFO: Trying to get logs from node kind-worker2 pod downward-api-bdda7be2-efc6-4a6b-85d8-0a9e7c3703de container dapi-container: <nil>
STEP: delete the pod
Jun  1 01:28:05.040: INFO: Waiting for pod downward-api-bdda7be2-efc6-4a6b-85d8-0a9e7c3703de to disappear
Jun  1 01:28:05.042: INFO: Pod downward-api-bdda7be2-efc6-4a6b-85d8-0a9e7c3703de no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Jun  1 01:28:05.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-211" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":292,"completed":10,"skipped":223,"failed":0}
SSSSSSSS
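The downward API env-var test above injects pod name, namespace, and IP through fieldRef selectors. A minimal sketch of those env vars; the variable names are illustrative:

	package main

	import (
		"encoding/json"
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	func main() {
		env := []corev1.EnvVar{
			{Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}}},
			{Name: "POD_NAMESPACE", ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}},
			{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"}}},
		}
		out, _ := json.MarshalIndent(env, "", "  ")
		fmt.Println(string(out))
	}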
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 01:28:05.080: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e98f5059-60e1-4910-9aee-4910883c12e8" in namespace "projected-3887" to be "Succeeded or Failed"
Jun  1 01:28:05.082: INFO: Pod "downwardapi-volume-e98f5059-60e1-4910-9aee-4910883c12e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012524ms
Jun  1 01:28:07.085: INFO: Pod "downwardapi-volume-e98f5059-60e1-4910-9aee-4910883c12e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005850921s
Jun  1 01:28:09.092: INFO: Pod "downwardapi-volume-e98f5059-60e1-4910-9aee-4910883c12e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012511952s
STEP: Saw pod success
Jun  1 01:28:09.092: INFO: Pod "downwardapi-volume-e98f5059-60e1-4910-9aee-4910883c12e8" satisfied condition "Succeeded or Failed"
Jun  1 01:28:09.095: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-e98f5059-60e1-4910-9aee-4910883c12e8 container client-container: <nil>
STEP: delete the pod
Jun  1 01:28:09.113: INFO: Waiting for pod downwardapi-volume-e98f5059-60e1-4910-9aee-4910883c12e8 to disappear
Jun  1 01:28:09.116: INFO: Pod downwardapi-volume-e98f5059-60e1-4910-9aee-4910883c12e8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 01:28:09.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3887" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":11,"skipped":231,"failed":0}
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 64 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Jun  1 01:28:14.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1148" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":292,"completed":12,"skipped":233,"failed":0}
SSSS
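The scheduler-predicate test above validates pod resource limits against node allocatable. A minimal sketch of the limits block such a pod carries; the quantities here are assumptions:

	package main

	import (
		"encoding/json"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		"k8s.io/apimachinery/pkg/api/resource"
	)

	func main() {
		req := corev1.ResourceRequirements{
			// With no explicit requests, requests default to these limits,
			// and the scheduler checks them against node allocatable.
			Limits: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("500m"),
				corev1.ResourceMemory: resource.MustParse("128Mi"),
			},
		}
		out, _ := json.MarshalIndent(req, "", "  ")
		fmt.Println(string(out))
	}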
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 132 lines ...
Jun  1 01:28:40.819: INFO: stderr: ""
Jun  1 01:28:40.819: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 01:28:40.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4832" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":292,"completed":13,"skipped":237,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Jun  1 01:28:46.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9620" for this suite.
STEP: Destroying namespace "webhook-9620-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":292,"completed":14,"skipped":255,"failed":0}
SS
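The webhook test above exercises listing mutating webhook configurations. A minimal sketch of the same list call with client-go, assuming the kubeconfig path from the log:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/kind-test-config")
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			log.Fatal(err)
		}
		// Cluster-scoped list; the e2e test additionally filters by label.
		list, err := clientset.AdmissionregistrationV1().
			MutatingWebhookConfigurations().
			List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, cfg := range list.Items {
			fmt.Printf("%s: %d webhook(s)\n", cfg.Name, len(cfg.Webhooks))
		}
	}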
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-025b7aec-f1bb-4bba-a317-e6fe08dd2377
STEP: Creating a pod to test consume secrets
Jun  1 01:28:46.679: INFO: Waiting up to 5m0s for pod "pod-secrets-4ac824c8-58cf-4f27-af27-fb275901f66b" in namespace "secrets-3592" to be "Succeeded or Failed"
Jun  1 01:28:46.685: INFO: Pod "pod-secrets-4ac824c8-58cf-4f27-af27-fb275901f66b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.455897ms
Jun  1 01:28:48.688: INFO: Pod "pod-secrets-4ac824c8-58cf-4f27-af27-fb275901f66b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009135417s
Jun  1 01:28:50.692: INFO: Pod "pod-secrets-4ac824c8-58cf-4f27-af27-fb275901f66b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013363546s
STEP: Saw pod success
Jun  1 01:28:50.692: INFO: Pod "pod-secrets-4ac824c8-58cf-4f27-af27-fb275901f66b" satisfied condition "Succeeded or Failed"
Jun  1 01:28:50.695: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-4ac824c8-58cf-4f27-af27-fb275901f66b container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 01:28:50.712: INFO: Waiting for pod pod-secrets-4ac824c8-58cf-4f27-af27-fb275901f66b to disappear
Jun  1 01:28:50.714: INFO: Pod pod-secrets-4ac824c8-58cf-4f27-af27-fb275901f66b no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 01:28:50.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3592" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":292,"completed":15,"skipped":257,"failed":0}
SS
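The secrets test above mounts a secret "with mappings", i.e. with an explicit key-to-path item list instead of the default one-file-per-key layout. A minimal sketch of that volume; the key and path are illustrative:

	package main

	import (
		"encoding/json"
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	func main() {
		vol := corev1.Volume{
			Name: "secret-volume",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{
					SecretName: "secret-test-map",
					// Only the listed keys are projected, at the given paths.
					Items: []corev1.KeyToPath{{
						Key:  "data-1",
						Path: "new-path-data-1",
					}},
				},
			},
		}
		out, _ := json.MarshalIndent(vol, "", "  ")
		fmt.Println(string(out))
	}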
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Jun  1 01:28:54.819: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:28:54.823: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:28:54.845: INFO: Unable to read jessie_udp@dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:28:54.848: INFO: Unable to read jessie_tcp@dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:28:54.851: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:28:54.853: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:28:54.869: INFO: Lookups using dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466 failed for: [wheezy_udp@dns-test-service.dns-9581.svc.cluster.local wheezy_tcp@dns-test-service.dns-9581.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local jessie_udp@dns-test-service.dns-9581.svc.cluster.local jessie_tcp@dns-test-service.dns-9581.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local]

Jun  1 01:28:59.876: INFO: Unable to read wheezy_udp@dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:28:59.879: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:28:59.882: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:28:59.885: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:28:59.904: INFO: Unable to read jessie_udp@dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:28:59.907: INFO: Unable to read jessie_tcp@dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:28:59.909: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:28:59.912: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:28:59.927: INFO: Lookups using dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466 failed for: [wheezy_udp@dns-test-service.dns-9581.svc.cluster.local wheezy_tcp@dns-test-service.dns-9581.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local jessie_udp@dns-test-service.dns-9581.svc.cluster.local jessie_tcp@dns-test-service.dns-9581.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local]

Jun  1 01:29:04.873: INFO: Unable to read wheezy_udp@dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:04.877: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:04.880: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:04.883: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:04.920: INFO: Unable to read jessie_udp@dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:04.923: INFO: Unable to read jessie_tcp@dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:04.925: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:04.928: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:04.944: INFO: Lookups using dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466 failed for: [wheezy_udp@dns-test-service.dns-9581.svc.cluster.local wheezy_tcp@dns-test-service.dns-9581.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local jessie_udp@dns-test-service.dns-9581.svc.cluster.local jessie_tcp@dns-test-service.dns-9581.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local]

Jun  1 01:29:09.873: INFO: Unable to read wheezy_udp@dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:09.877: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:09.880: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:09.883: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:09.910: INFO: Unable to read jessie_udp@dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:09.913: INFO: Unable to read jessie_tcp@dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:09.916: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:09.919: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:09.948: INFO: Lookups using dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466 failed for: [wheezy_udp@dns-test-service.dns-9581.svc.cluster.local wheezy_tcp@dns-test-service.dns-9581.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local jessie_udp@dns-test-service.dns-9581.svc.cluster.local jessie_tcp@dns-test-service.dns-9581.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local]

Jun  1 01:29:14.873: INFO: Unable to read wheezy_udp@dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:14.877: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:14.880: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:14.883: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:14.903: INFO: Unable to read jessie_udp@dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:14.906: INFO: Unable to read jessie_tcp@dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:14.909: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:14.912: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:14.927: INFO: Lookups using dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466 failed for: [wheezy_udp@dns-test-service.dns-9581.svc.cluster.local wheezy_tcp@dns-test-service.dns-9581.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local jessie_udp@dns-test-service.dns-9581.svc.cluster.local jessie_tcp@dns-test-service.dns-9581.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local]

Jun  1 01:29:19.875: INFO: Unable to read wheezy_udp@dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:19.878: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:19.881: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:19.884: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:19.903: INFO: Unable to read jessie_udp@dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:19.905: INFO: Unable to read jessie_tcp@dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:19.908: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:19.911: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local from pod dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466: the server could not find the requested resource (get pods dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466)
Jun  1 01:29:19.927: INFO: Lookups using dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466 failed for: [wheezy_udp@dns-test-service.dns-9581.svc.cluster.local wheezy_tcp@dns-test-service.dns-9581.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local jessie_udp@dns-test-service.dns-9581.svc.cluster.local jessie_tcp@dns-test-service.dns-9581.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9581.svc.cluster.local]

Jun  1 01:29:24.930: INFO: DNS probes using dns-9581/dns-test-677f43af-112b-4fa0-92b8-f12e8acd5466 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 01:29:25.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9581" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":292,"completed":16,"skipped":259,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-82d2ba97-6923-4e4b-94b3-b8f6f3f0a3d3
STEP: Creating a pod to test consume configMaps
Jun  1 01:29:25.083: INFO: Waiting up to 5m0s for pod "pod-configmaps-175c5a92-0438-4636-b38f-8a7c988a7c82" in namespace "configmap-5401" to be "Succeeded or Failed"
Jun  1 01:29:25.087: INFO: Pod "pod-configmaps-175c5a92-0438-4636-b38f-8a7c988a7c82": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171299ms
Jun  1 01:29:27.091: INFO: Pod "pod-configmaps-175c5a92-0438-4636-b38f-8a7c988a7c82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007685557s
STEP: Saw pod success
Jun  1 01:29:27.091: INFO: Pod "pod-configmaps-175c5a92-0438-4636-b38f-8a7c988a7c82" satisfied condition "Succeeded or Failed"
Jun  1 01:29:27.093: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-175c5a92-0438-4636-b38f-8a7c988a7c82 container configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 01:29:27.127: INFO: Waiting for pod pod-configmaps-175c5a92-0438-4636-b38f-8a7c988a7c82 to disappear
Jun  1 01:29:27.129: INFO: Pod pod-configmaps-175c5a92-0438-4636-b38f-8a7c988a7c82 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 01:29:27.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5401" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":292,"completed":17,"skipped":273,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 01:29:33.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-6576" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":292,"completed":18,"skipped":305,"failed":0}
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 2 lines ...
Jun  1 01:29:33.951: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun  1 01:29:36.021: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Jun  1 01:29:36.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4491" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":292,"completed":19,"skipped":308,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 20 lines ...
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Jun  1 01:30:01.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3610" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":292,"completed":20,"skipped":335,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 01:30:17.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6650" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":292,"completed":21,"skipped":341,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 01:30:17.464: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fba8ac32-790c-4af4-8a53-2db93f9234c9" in namespace "downward-api-2545" to be "Succeeded or Failed"
Jun  1 01:30:17.466: INFO: Pod "downwardapi-volume-fba8ac32-790c-4af4-8a53-2db93f9234c9": Phase="Pending", Reason="", readiness=false. Elapsed: 1.986328ms
Jun  1 01:30:19.469: INFO: Pod "downwardapi-volume-fba8ac32-790c-4af4-8a53-2db93f9234c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005516133s
STEP: Saw pod success
Jun  1 01:30:19.470: INFO: Pod "downwardapi-volume-fba8ac32-790c-4af4-8a53-2db93f9234c9" satisfied condition "Succeeded or Failed"
Jun  1 01:30:19.473: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-fba8ac32-790c-4af4-8a53-2db93f9234c9 container client-container: <nil>
STEP: delete the pod
Jun  1 01:30:19.487: INFO: Waiting for pod downwardapi-volume-fba8ac32-790c-4af4-8a53-2db93f9234c9 to disappear
Jun  1 01:30:19.490: INFO: Pod downwardapi-volume-fba8ac32-790c-4af4-8a53-2db93f9234c9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 01:30:19.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2545" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":292,"completed":22,"skipped":352,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-7f4bf6fe-ec84-4756-84f2-d386d3d1b7b8
STEP: Creating a pod to test consume secrets
Jun  1 01:30:19.531: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2b06ec92-e5bf-4b33-b4d7-3172dcabb6e1" in namespace "projected-4450" to be "Succeeded or Failed"
Jun  1 01:30:19.533: INFO: Pod "pod-projected-secrets-2b06ec92-e5bf-4b33-b4d7-3172dcabb6e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031449ms
Jun  1 01:30:21.537: INFO: Pod "pod-projected-secrets-2b06ec92-e5bf-4b33-b4d7-3172dcabb6e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006400237s
STEP: Saw pod success
Jun  1 01:30:21.537: INFO: Pod "pod-projected-secrets-2b06ec92-e5bf-4b33-b4d7-3172dcabb6e1" satisfied condition "Succeeded or Failed"
Jun  1 01:30:21.540: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-2b06ec92-e5bf-4b33-b4d7-3172dcabb6e1 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun  1 01:30:21.556: INFO: Waiting for pod pod-projected-secrets-2b06ec92-e5bf-4b33-b4d7-3172dcabb6e1 to disappear
Jun  1 01:30:21.560: INFO: Pod pod-projected-secrets-2b06ec92-e5bf-4b33-b4d7-3172dcabb6e1 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Jun  1 01:30:21.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4450" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":23,"skipped":410,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 5 lines ...
[BeforeEach] [sig-network] Services
  test/e2e/network/service.go:809
[It] should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating service multi-endpoint-test in namespace services-8632
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8632 to expose endpoints map[]
Jun  1 01:30:21.613: INFO: Get endpoints failed (2.524154ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jun  1 01:30:22.616: INFO: successfully validated that service multi-endpoint-test in namespace services-8632 exposes endpoints map[] (1.005783476s elapsed)
STEP: Creating pod pod1 in namespace services-8632
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8632 to expose endpoints map[pod1:[100]]
Jun  1 01:30:24.647: INFO: successfully validated that service multi-endpoint-test in namespace services-8632 exposes endpoints map[pod1:[100]] (2.020639334s elapsed)
STEP: Creating pod pod2 in namespace services-8632
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8632 to expose endpoints map[pod1:[100] pod2:[101]]
... skipping 7 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 01:30:28.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8632" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":292,"completed":24,"skipped":429,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 01:30:44.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7676" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":292,"completed":25,"skipped":439,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 24 lines ...
Jun  1 01:30:45.463: INFO: created pod pod-service-account-nomountsa-nomountspec
Jun  1 01:30:45.463: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
Jun  1 01:30:45.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3529" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":292,"completed":26,"skipped":488,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 9 lines ...
STEP: Updating configmap projected-configmap-test-upd-62fa648a-653e-4a5e-82cb-f460f83259d5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 01:31:53.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1523" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":27,"skipped":492,"failed":0}

------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 01:32:03.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4564" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":292,"completed":28,"skipped":492,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 26 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 01:32:11.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3518" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":292,"completed":29,"skipped":501,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 01:32:11.300: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun  1 01:32:11.332: INFO: Waiting up to 5m0s for pod "pod-3d52d908-d881-4982-84e5-ddd0c7d14ed4" in namespace "emptydir-7295" to be "Succeeded or Failed"
Jun  1 01:32:11.336: INFO: Pod "pod-3d52d908-d881-4982-84e5-ddd0c7d14ed4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.822779ms
Jun  1 01:32:13.339: INFO: Pod "pod-3d52d908-d881-4982-84e5-ddd0c7d14ed4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007207889s
Jun  1 01:32:15.343: INFO: Pod "pod-3d52d908-d881-4982-84e5-ddd0c7d14ed4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010521611s
STEP: Saw pod success
Jun  1 01:32:15.343: INFO: Pod "pod-3d52d908-d881-4982-84e5-ddd0c7d14ed4" satisfied condition "Succeeded or Failed"
Jun  1 01:32:15.345: INFO: Trying to get logs from node kind-worker2 pod pod-3d52d908-d881-4982-84e5-ddd0c7d14ed4 container test-container: <nil>
STEP: delete the pod
Jun  1 01:32:15.369: INFO: Waiting for pod pod-3d52d908-d881-4982-84e5-ddd0c7d14ed4 to disappear
Jun  1 01:32:15.372: INFO: Pod pod-3d52d908-d881-4982-84e5-ddd0c7d14ed4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 01:32:15.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7295" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":30,"skipped":520,"failed":0}
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 343 lines ...
Jun  1 01:32:25.765: INFO: Deleting ReplicationController proxy-service-pz9mg took: 4.881839ms
Jun  1 01:32:26.065: INFO: Terminating ReplicationController proxy-service-pz9mg pods took: 300.253508ms
[AfterEach] version v1
  test/e2e/framework/framework.go:175
Jun  1 01:32:27.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4179" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":292,"completed":31,"skipped":523,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 45 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 01:32:49.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3295" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":292,"completed":32,"skipped":556,"failed":0}
SSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
... skipping 7 lines ...
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  test/e2e/framework/framework.go:175
Jun  1 01:32:49.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3541" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":292,"completed":33,"skipped":559,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
Jun  1 01:32:54.146: INFO: Successfully updated pod "labelsupdate092508ec-9829-445c-871e-f077560fe9ff"
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 01:32:56.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9353" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":292,"completed":34,"skipped":573,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected combined
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-projected-all-test-volume-61426ee0-e0cc-49c6-b99a-8cfa35101b86
STEP: Creating secret with name secret-projected-all-test-volume-0c1c9e1f-3d06-47b6-b263-4e74cd6e8a5c
STEP: Creating a pod to test Check all projections for projected volume plugin
Jun  1 01:32:56.211: INFO: Waiting up to 5m0s for pod "projected-volume-c4b01424-90d9-4371-840b-d108cafd17e9" in namespace "projected-5662" to be "Succeeded or Failed"
Jun  1 01:32:56.213: INFO: Pod "projected-volume-c4b01424-90d9-4371-840b-d108cafd17e9": Phase="Pending", Reason="", readiness=false. Elapsed: 1.907427ms
Jun  1 01:32:58.217: INFO: Pod "projected-volume-c4b01424-90d9-4371-840b-d108cafd17e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005854058s
Jun  1 01:33:00.222: INFO: Pod "projected-volume-c4b01424-90d9-4371-840b-d108cafd17e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010636256s
STEP: Saw pod success
Jun  1 01:33:00.222: INFO: Pod "projected-volume-c4b01424-90d9-4371-840b-d108cafd17e9" satisfied condition "Succeeded or Failed"
Jun  1 01:33:00.225: INFO: Trying to get logs from node kind-worker pod projected-volume-c4b01424-90d9-4371-840b-d108cafd17e9 container projected-all-volume-test: <nil>
STEP: delete the pod
Jun  1 01:33:00.239: INFO: Waiting for pod projected-volume-c4b01424-90d9-4371-840b-d108cafd17e9 to disappear
Jun  1 01:33:00.241: INFO: Pod projected-volume-c4b01424-90d9-4371-840b-d108cafd17e9 no longer exists
[AfterEach] [sig-storage] Projected combined
  test/e2e/framework/framework.go:175
Jun  1 01:33:00.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5662" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":292,"completed":35,"skipped":592,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should be possible to delete [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Jun  1 01:33:00.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5516" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":292,"completed":36,"skipped":606,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl diff 
  should check if kubectl diff finds a difference for Deployments [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 17 lines ...
Jun  1 01:33:01.924: INFO: stderr: ""
Jun  1 01:33:01.924: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 01:33:01.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9124" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":292,"completed":37,"skipped":625,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 65 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 01:33:19.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2007" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":292,"completed":38,"skipped":639,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 01:33:19.301: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename webhook
... skipping 6 lines ...
STEP: Wait for the deployment to be ready
Jun  1 01:33:20.343: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun  1 01:33:22.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726572000, loc:(*time.Location)(0x8006d20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726572000, loc:(*time.Location)(0x8006d20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726572000, loc:(*time.Location)(0x8006d20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726572000, loc:(*time.Location)(0x8006d20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun  1 01:33:25.373: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 01:33:25.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4877" for this suite.
STEP: Destroying namespace "webhook-4877-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":292,"completed":39,"skipped":674,"failed":0}
SSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] PreStop
... skipping 26 lines ...
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  test/e2e/framework/framework.go:175
Jun  1 01:33:38.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-8979" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":292,"completed":40,"skipped":680,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Jun  1 01:33:38.640: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config proxy --unix-socket=/tmp/kubectl-proxy-unix171851754/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 01:33:38.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4174" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":292,"completed":41,"skipped":712,"failed":0}
SSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should run through the lifecycle of a ServiceAccount [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 10 lines ...
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
Jun  1 01:33:38.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7083" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":292,"completed":42,"skipped":717,"failed":0}
SSSS
------------------------------
[sig-node] PodTemplates 
  should run the lifecycle of PodTemplates [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] PodTemplates
... skipping 5 lines ...
[It] should run the lifecycle of PodTemplates [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [sig-node] PodTemplates
  test/e2e/framework/framework.go:175
Jun  1 01:33:38.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-3309" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":292,"completed":43,"skipped":721,"failed":0}

------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 22 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 01:33:45.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3376" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":292,"completed":44,"skipped":721,"failed":0}
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-projected-kf5g
STEP: Creating a pod to test atomic-volume-subpath
Jun  1 01:33:45.817: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-kf5g" in namespace "subpath-6937" to be "Succeeded or Failed"
Jun  1 01:33:45.822: INFO: Pod "pod-subpath-test-projected-kf5g": Phase="Pending", Reason="", readiness=false. Elapsed: 5.094478ms
Jun  1 01:33:47.827: INFO: Pod "pod-subpath-test-projected-kf5g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009998207s
Jun  1 01:33:49.831: INFO: Pod "pod-subpath-test-projected-kf5g": Phase="Running", Reason="", readiness=true. Elapsed: 4.01429257s
Jun  1 01:33:51.835: INFO: Pod "pod-subpath-test-projected-kf5g": Phase="Running", Reason="", readiness=true. Elapsed: 6.01846279s
Jun  1 01:33:53.839: INFO: Pod "pod-subpath-test-projected-kf5g": Phase="Running", Reason="", readiness=true. Elapsed: 8.022477172s
Jun  1 01:33:55.847: INFO: Pod "pod-subpath-test-projected-kf5g": Phase="Running", Reason="", readiness=true. Elapsed: 10.030169956s
... skipping 2 lines ...
Jun  1 01:34:01.859: INFO: Pod "pod-subpath-test-projected-kf5g": Phase="Running", Reason="", readiness=true. Elapsed: 16.041640418s
Jun  1 01:34:03.862: INFO: Pod "pod-subpath-test-projected-kf5g": Phase="Running", Reason="", readiness=true. Elapsed: 18.044930333s
Jun  1 01:34:05.868: INFO: Pod "pod-subpath-test-projected-kf5g": Phase="Running", Reason="", readiness=true. Elapsed: 20.050580575s
Jun  1 01:34:07.871: INFO: Pod "pod-subpath-test-projected-kf5g": Phase="Running", Reason="", readiness=true. Elapsed: 22.053925321s
Jun  1 01:34:09.881: INFO: Pod "pod-subpath-test-projected-kf5g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.064230586s
STEP: Saw pod success
Jun  1 01:34:09.881: INFO: Pod "pod-subpath-test-projected-kf5g" satisfied condition "Succeeded or Failed"
Jun  1 01:34:09.886: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-projected-kf5g container test-container-subpath-projected-kf5g: <nil>
STEP: delete the pod
Jun  1 01:34:09.900: INFO: Waiting for pod pod-subpath-test-projected-kf5g to disappear
Jun  1 01:34:09.902: INFO: Pod pod-subpath-test-projected-kf5g no longer exists
STEP: Deleting pod pod-subpath-test-projected-kf5g
Jun  1 01:34:09.902: INFO: Deleting pod "pod-subpath-test-projected-kf5g" in namespace "subpath-6937"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Jun  1 01:34:09.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6937" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":292,"completed":45,"skipped":724,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Jun  1 01:34:09.937: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 01:34:10.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7052" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":292,"completed":46,"skipped":732,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 39 lines ...
Jun  1 01:34:18.718: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config explain e2e-test-crd-publish-openapi-1204-crds.spec'
Jun  1 01:34:19.292: INFO: stderr: ""
Jun  1 01:34:19.292: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1204-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jun  1 01:34:19.292: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config explain e2e-test-crd-publish-openapi-1204-crds.spec.bars'
Jun  1 01:34:19.851: INFO: stderr: ""
Jun  1 01:34:19.851: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1204-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jun  1 01:34:19.852: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config explain e2e-test-crd-publish-openapi-1204-crds.spec.bars2'
Jun  1 01:34:20.360: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 01:34:23.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4117" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":292,"completed":47,"skipped":741,"failed":0}

------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 01:34:23.835: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5b543e5c-7449-4465-a112-e1d0e5a49384" in namespace "projected-5851" to be "Succeeded or Failed"
Jun  1 01:34:23.837: INFO: Pod "downwardapi-volume-5b543e5c-7449-4465-a112-e1d0e5a49384": Phase="Pending", Reason="", readiness=false. Elapsed: 1.981078ms
Jun  1 01:34:25.841: INFO: Pod "downwardapi-volume-5b543e5c-7449-4465-a112-e1d0e5a49384": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005367603s
Jun  1 01:34:27.845: INFO: Pod "downwardapi-volume-5b543e5c-7449-4465-a112-e1d0e5a49384": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009132269s
STEP: Saw pod success
Jun  1 01:34:27.845: INFO: Pod "downwardapi-volume-5b543e5c-7449-4465-a112-e1d0e5a49384" satisfied condition "Succeeded or Failed"
Jun  1 01:34:27.848: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-5b543e5c-7449-4465-a112-e1d0e5a49384 container client-container: <nil>
STEP: delete the pod
Jun  1 01:34:27.861: INFO: Waiting for pod downwardapi-volume-5b543e5c-7449-4465-a112-e1d0e5a49384 to disappear
Jun  1 01:34:27.864: INFO: Pod downwardapi-volume-5b543e5c-7449-4465-a112-e1d0e5a49384 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 01:34:27.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5851" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":292,"completed":48,"skipped":741,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 01:34:27.901: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df6e6d60-b7da-4387-a79e-f2858dd13d23" in namespace "projected-5340" to be "Succeeded or Failed"
Jun  1 01:34:27.904: INFO: Pod "downwardapi-volume-df6e6d60-b7da-4387-a79e-f2858dd13d23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.517969ms
Jun  1 01:34:29.908: INFO: Pod "downwardapi-volume-df6e6d60-b7da-4387-a79e-f2858dd13d23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00630294s
Jun  1 01:34:31.912: INFO: Pod "downwardapi-volume-df6e6d60-b7da-4387-a79e-f2858dd13d23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010881077s
STEP: Saw pod success
Jun  1 01:34:31.912: INFO: Pod "downwardapi-volume-df6e6d60-b7da-4387-a79e-f2858dd13d23" satisfied condition "Succeeded or Failed"
Jun  1 01:34:31.915: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-df6e6d60-b7da-4387-a79e-f2858dd13d23 container client-container: <nil>
STEP: delete the pod
Jun  1 01:34:31.930: INFO: Waiting for pod downwardapi-volume-df6e6d60-b7da-4387-a79e-f2858dd13d23 to disappear
Jun  1 01:34:31.933: INFO: Pod downwardapi-volume-df6e6d60-b7da-4387-a79e-f2858dd13d23 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 01:34:31.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5340" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":49,"skipped":751,"failed":0}
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 38 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Jun  1 01:34:40.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3901" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":292,"completed":50,"skipped":752,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 27 lines ...
Jun  1 01:35:00.334: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 01:35:00.479: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Jun  1 01:35:00.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8982" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":292,"completed":51,"skipped":785,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
Jun  1 01:35:07.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5193" for this suite.
STEP: Destroying namespace "webhook-5193-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":292,"completed":52,"skipped":805,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 01:35:07.297: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jun  1 01:35:07.365: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  1 01:35:07.370: INFO: Number of nodes with available pods: 0
Jun  1 01:35:07.370: INFO: Node kind-worker is running more than one daemon pod
... skipping 21 lines ...
Jun  1 01:35:15.375: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  1 01:35:15.379: INFO: Number of nodes with available pods: 1
Jun  1 01:35:15.379: INFO: Node kind-worker is running more than one daemon pod
Jun  1 01:35:16.375: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  1 01:35:16.377: INFO: Number of nodes with available pods: 2
Jun  1 01:35:16.377: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jun  1 01:35:16.390: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  1 01:35:16.398: INFO: Number of nodes with available pods: 1
Jun  1 01:35:16.398: INFO: Node kind-worker2 is running more than one daemon pod
Jun  1 01:35:17.403: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  1 01:35:17.406: INFO: Number of nodes with available pods: 1
Jun  1 01:35:17.406: INFO: Node kind-worker2 is running more than one daemon pod
Jun  1 01:35:18.402: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun  1 01:35:18.405: INFO: Number of nodes with available pods: 2
Jun  1 01:35:18.405: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4692, will wait for the garbage collector to delete the pods
Jun  1 01:35:18.469: INFO: Deleting DaemonSet.extensions daemon-set took: 6.289225ms
Jun  1 01:35:18.569: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.277196ms
... skipping 4 lines ...
Jun  1 01:35:29.280: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4692/pods","resourceVersion":"4817"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Jun  1 01:35:29.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4692" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":292,"completed":53,"skipped":813,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Jun  1 01:35:29.318: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Jun  1 01:35:33.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5388" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":292,"completed":54,"skipped":826,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
... skipping 11 lines ...
Jun  1 01:35:36.229: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:175
Jun  1 01:35:37.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6534" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":292,"completed":55,"skipped":844,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 01:35:37.288: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47e423ef-e47a-43c7-b70b-3059365aa689" in namespace "downward-api-3633" to be "Succeeded or Failed"
Jun  1 01:35:37.291: INFO: Pod "downwardapi-volume-47e423ef-e47a-43c7-b70b-3059365aa689": Phase="Pending", Reason="", readiness=false. Elapsed: 2.803869ms
Jun  1 01:35:39.296: INFO: Pod "downwardapi-volume-47e423ef-e47a-43c7-b70b-3059365aa689": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008047884s
STEP: Saw pod success
Jun  1 01:35:39.296: INFO: Pod "downwardapi-volume-47e423ef-e47a-43c7-b70b-3059365aa689" satisfied condition "Succeeded or Failed"
Jun  1 01:35:39.299: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-47e423ef-e47a-43c7-b70b-3059365aa689 container client-container: <nil>
STEP: delete the pod
Jun  1 01:35:39.331: INFO: Waiting for pod downwardapi-volume-47e423ef-e47a-43c7-b70b-3059365aa689 to disappear
Jun  1 01:35:39.333: INFO: Pod downwardapi-volume-47e423ef-e47a-43c7-b70b-3059365aa689 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 01:35:39.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3633" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":56,"skipped":850,"failed":0}
SSSSSSSSSSSSSS
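
The DefaultMode test above waits for a pod whose downward-API volume files carry specific permission bits. A sketch of such a spec; the 0400 mode, file names, and image are illustrative, not the values the suite uses.

// Illustrative sketch only: a downward-API volume with an explicit DefaultMode.
package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    mode := int32(0400)
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:         "client-container",
                Image:        "busybox:1.31",
                Command:      []string{"sh", "-c", "ls -l /etc/podinfo"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        // DefaultMode is the permission setting the test
                        // asserts on the projected files.
                        DefaultMode: &mode,
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "podname",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                        }},
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(&pod, "", "  ")
    fmt.Println(string(out))
}
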
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 17 lines ...
Jun  1 01:38:09.703: INFO: Restart count of pod container-probe-7078/liveness-c42f1c11-9670-4573-8b55-9563ba5d1448 is now 5 (2m28.311919589s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 01:38:09.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7078" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":292,"completed":57,"skipped":864,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
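
The probe test above watches a deliberately failing liveness probe drive the restart count from 0 to 5. A sketch of a pod built to do that, assuming busybox and the core/v1 types of this log's vintage (where the probe handler is the embedded corev1.Handler; later releases renamed it ProbeHandler):

// Illustrative sketch only: a liveness probe that starts failing after 30s.
package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "liveness",
                Image: "busybox:1.31",
                // Healthy for 30s, then the probe target disappears; the
                // kubelet restarts the container and the restart count
                // climbs monotonically, which is what the test asserts.
                Command: []string{"sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
                LivenessProbe: &corev1.Probe{
                    Handler: corev1.Handler{
                        Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                    },
                    InitialDelaySeconds: 5,
                    PeriodSeconds:       5,
                    FailureThreshold:    1,
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(&pod, "", "  ")
    fmt.Println(string(out))
}
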
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 01:38:09.719: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun  1 01:38:09.779: INFO: Waiting up to 5m0s for pod "pod-ea4afcc0-c8c1-4a67-a4f0-50864c24d202" in namespace "emptydir-3563" to be "Succeeded or Failed"
Jun  1 01:38:09.781: INFO: Pod "pod-ea4afcc0-c8c1-4a67-a4f0-50864c24d202": Phase="Pending", Reason="", readiness=false. Elapsed: 1.857864ms
Jun  1 01:38:11.785: INFO: Pod "pod-ea4afcc0-c8c1-4a67-a4f0-50864c24d202": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005547954s
STEP: Saw pod success
Jun  1 01:38:11.785: INFO: Pod "pod-ea4afcc0-c8c1-4a67-a4f0-50864c24d202" satisfied condition "Succeeded or Failed"
Jun  1 01:38:11.788: INFO: Trying to get logs from node kind-worker2 pod pod-ea4afcc0-c8c1-4a67-a4f0-50864c24d202 container test-container: <nil>
STEP: delete the pod
Jun  1 01:38:11.814: INFO: Waiting for pod pod-ea4afcc0-c8c1-4a67-a4f0-50864c24d202 to disappear
Jun  1 01:38:11.816: INFO: Pod pod-ea4afcc0-c8c1-4a67-a4f0-50864c24d202 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 01:38:11.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3563" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":58,"skipped":890,"failed":0}
SSSSSSSSSS
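
The emptydir tests cycle through (user, mode, medium) combinations; "0666 on node default medium" above is one cell of that matrix. A sketch of the default-medium case with placeholder names:

// Illustrative sketch only: an emptyDir on the node's default medium.
package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:         "test-container",
                Image:        "busybox:1.31",
                Command:      []string{"sh", "-c", "ls -ld /test-volume && stat -c %a /test-volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Default medium = node-local disk; StorageMediumMemory
                    // backs the volume with tmpfs instead, which is the
                    // variant the "volume on tmpfs" test later exercises.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(&pod, "", "  ")
    fmt.Println(string(out))
}
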
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
... skipping 18 lines ...
STEP: Deleting second CR
Jun  1 01:39:02.450: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-01T01:38:22Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-01T01:38:42Z]] name:name2 resourceVersion:5665 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:f73b33ac-a503-42bd-a7d5-c930c059110c] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 01:39:12.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-48" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":292,"completed":59,"skipped":900,"failed":0}
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-htxj
STEP: Creating a pod to test atomic-volume-subpath
Jun  1 01:39:13.007: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-htxj" in namespace "subpath-8017" to be "Succeeded or Failed"
Jun  1 01:39:13.009: INFO: Pod "pod-subpath-test-configmap-htxj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009854ms
Jun  1 01:39:15.013: INFO: Pod "pod-subpath-test-configmap-htxj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005967806s
Jun  1 01:39:17.017: INFO: Pod "pod-subpath-test-configmap-htxj": Phase="Running", Reason="", readiness=true. Elapsed: 4.009867555s
Jun  1 01:39:19.021: INFO: Pod "pod-subpath-test-configmap-htxj": Phase="Running", Reason="", readiness=true. Elapsed: 6.013552334s
Jun  1 01:39:21.025: INFO: Pod "pod-subpath-test-configmap-htxj": Phase="Running", Reason="", readiness=true. Elapsed: 8.018033081s
Jun  1 01:39:23.029: INFO: Pod "pod-subpath-test-configmap-htxj": Phase="Running", Reason="", readiness=true. Elapsed: 10.021591147s
... skipping 2 lines ...
Jun  1 01:39:29.044: INFO: Pod "pod-subpath-test-configmap-htxj": Phase="Running", Reason="", readiness=true. Elapsed: 16.03675667s
Jun  1 01:39:31.048: INFO: Pod "pod-subpath-test-configmap-htxj": Phase="Running", Reason="", readiness=true. Elapsed: 18.040502284s
Jun  1 01:39:33.052: INFO: Pod "pod-subpath-test-configmap-htxj": Phase="Running", Reason="", readiness=true. Elapsed: 20.044364456s
Jun  1 01:39:35.055: INFO: Pod "pod-subpath-test-configmap-htxj": Phase="Running", Reason="", readiness=true. Elapsed: 22.04810372s
Jun  1 01:39:37.059: INFO: Pod "pod-subpath-test-configmap-htxj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.05218503s
STEP: Saw pod success
Jun  1 01:39:37.059: INFO: Pod "pod-subpath-test-configmap-htxj" satisfied condition "Succeeded or Failed"
Jun  1 01:39:37.062: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-configmap-htxj container test-container-subpath-configmap-htxj: <nil>
STEP: delete the pod
Jun  1 01:39:37.077: INFO: Waiting for pod pod-subpath-test-configmap-htxj to disappear
Jun  1 01:39:37.080: INFO: Pod pod-subpath-test-configmap-htxj no longer exists
STEP: Deleting pod pod-subpath-test-configmap-htxj
Jun  1 01:39:37.080: INFO: Deleting pod "pod-subpath-test-configmap-htxj" in namespace "subpath-8017"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Jun  1 01:39:37.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8017" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":292,"completed":60,"skipped":901,"failed":0}
SSSSSSS
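
"mountPath of existing file" means the volume is mounted over a single file rather than a directory, via subPath. A sketch under assumed names (the ConfigMap "my-config", its key, and the target path are placeholders):

// Illustrative sketch only: subPath mounts one volume entry over a file.
package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "test-container-subpath",
                Image:   "busybox:1.31",
                Command: []string{"cat", "/etc/existing/file"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name: "config",
                    // The mount target is a single existing file path, not a
                    // directory; SubPath picks one entry out of the volume.
                    MountPath: "/etc/existing/file",
                    SubPath:   "configmap-key",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "config",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(&pod, "", "  ")
    fmt.Println(string(out))
}
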
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
... skipping 21 lines ...
Jun  1 01:39:44.166: INFO: Pod "adopt-release-bv57l": Phase="Running", Reason="", readiness=true. Elapsed: 2.006980674s
Jun  1 01:39:44.166: INFO: Pod "adopt-release-bv57l" satisfied condition "released"
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Jun  1 01:39:44.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-613" for this suite.
•{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":292,"completed":61,"skipped":908,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-359b68ca-e962-42a9-be82-a2a3e0fee434
STEP: Creating a pod to test consume configMaps
Jun  1 01:39:44.214: INFO: Waiting up to 5m0s for pod "pod-configmaps-01d436dd-ccfe-4249-af45-02fef0aaed57" in namespace "configmap-6129" to be "Succeeded or Failed"
Jun  1 01:39:44.217: INFO: Pod "pod-configmaps-01d436dd-ccfe-4249-af45-02fef0aaed57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.563484ms
Jun  1 01:39:46.222: INFO: Pod "pod-configmaps-01d436dd-ccfe-4249-af45-02fef0aaed57": Phase="Running", Reason="", readiness=true. Elapsed: 2.007525015s
Jun  1 01:39:48.225: INFO: Pod "pod-configmaps-01d436dd-ccfe-4249-af45-02fef0aaed57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011309705s
STEP: Saw pod success
Jun  1 01:39:48.225: INFO: Pod "pod-configmaps-01d436dd-ccfe-4249-af45-02fef0aaed57" satisfied condition "Succeeded or Failed"
Jun  1 01:39:48.229: INFO: Trying to get logs from node kind-worker pod pod-configmaps-01d436dd-ccfe-4249-af45-02fef0aaed57 container configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 01:39:48.253: INFO: Waiting for pod pod-configmaps-01d436dd-ccfe-4249-af45-02fef0aaed57 to disappear
Jun  1 01:39:48.256: INFO: Pod pod-configmaps-01d436dd-ccfe-4249-af45-02fef0aaed57 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 01:39:48.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6129" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":292,"completed":62,"skipped":947,"failed":0}
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Jun  1 01:39:48.263: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override arguments
Jun  1 01:39:48.291: INFO: Waiting up to 5m0s for pod "client-containers-6c7fe93f-85c6-4b16-9ed0-de1134a0588e" in namespace "containers-732" to be "Succeeded or Failed"
Jun  1 01:39:48.293: INFO: Pod "client-containers-6c7fe93f-85c6-4b16-9ed0-de1134a0588e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088212ms
Jun  1 01:39:50.297: INFO: Pod "client-containers-6c7fe93f-85c6-4b16-9ed0-de1134a0588e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005993163s
Jun  1 01:39:52.301: INFO: Pod "client-containers-6c7fe93f-85c6-4b16-9ed0-de1134a0588e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009882934s
STEP: Saw pod success
Jun  1 01:39:52.301: INFO: Pod "client-containers-6c7fe93f-85c6-4b16-9ed0-de1134a0588e" satisfied condition "Succeeded or Failed"
Jun  1 01:39:52.304: INFO: Trying to get logs from node kind-worker2 pod client-containers-6c7fe93f-85c6-4b16-9ed0-de1134a0588e container test-container: <nil>
STEP: delete the pod
Jun  1 01:39:52.317: INFO: Waiting for pod client-containers-6c7fe93f-85c6-4b16-9ed0-de1134a0588e to disappear
Jun  1 01:39:52.319: INFO: Pod client-containers-6c7fe93f-85c6-4b16-9ed0-de1134a0588e no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Jun  1 01:39:52.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-732" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":292,"completed":63,"skipped":952,"failed":0}
SSSSSSSSSSSSS
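
In pod-spec terms, the "docker cmd" override this test exercises is the container's Args field; Command would override the image ENTRYPOINT, which is the subject of the "docker entrypoint" test further below. A sketch with placeholder values:

// Illustrative sketch only: overriding the image's default arguments.
package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox:1.31",
                // Args replaces the image's CMD while leaving its ENTRYPOINT
                // alone -- the "docker cmd" override case tested above.
                Args: []string{"echo", "overridden", "arguments"},
            }},
        },
    }
    out, _ := json.MarshalIndent(&pod, "", "  ")
    fmt.Println(string(out))
}
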
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 22 lines ...
Jun  1 01:40:02.390: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-9175 /api/v1/namespaces/watch-9175/configmaps/e2e-watch-test-label-changed b8523340-5ac6-424f-9e70-6aaa666c3cb2 5992 0 2020-06-01 01:39:52 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-06-01 01:40:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Jun  1 01:40:02.391: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-9175 /api/v1/namespaces/watch-9175/configmaps/e2e-watch-test-label-changed b8523340-5ac6-424f-9e70-6aaa666c3cb2 5993 0 2020-06-01 01:39:52 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-06-01 01:40:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Jun  1 01:40:02.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9175" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":292,"completed":64,"skipped":965,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
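
The Watchers test drives the "Got : ADDED/MODIFIED/DELETED" lines above through a label-selector watch: once the ConfigMap's label stops matching, it drops out of the watch. A sketch of the same mechanism with client-go, assuming a client of this log's vintage (context-taking signatures); the kubeconfig path mirrors the log, while the namespace and selector are assumptions.

// Illustrative sketch only: watching ConfigMaps by label selector.
package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/kind-test-config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    w, err := client.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
        LabelSelector: "watch-this-configmap=label-changed-and-restored",
    })
    if err != nil {
        panic(err)
    }
    defer w.Stop()

    // ADDED/MODIFIED/DELETED events arrive on the channel; an object that
    // stops matching the selector is reported as DELETED from this watch's
    // point of view, even though it still exists in the cluster.
    for ev := range w.ResultChan() {
        fmt.Printf("Got : %s %v\n", ev.Type, ev.Object.GetObjectKind().GroupVersionKind())
    }
}
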
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Jun  1 01:40:02.400: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override command
Jun  1 01:40:02.435: INFO: Waiting up to 5m0s for pod "client-containers-91208a6b-7d8e-4ffa-8844-85723332acad" in namespace "containers-3960" to be "Succeeded or Failed"
Jun  1 01:40:02.437: INFO: Pod "client-containers-91208a6b-7d8e-4ffa-8844-85723332acad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0947ms
Jun  1 01:40:04.441: INFO: Pod "client-containers-91208a6b-7d8e-4ffa-8844-85723332acad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00576416s
Jun  1 01:40:06.444: INFO: Pod "client-containers-91208a6b-7d8e-4ffa-8844-85723332acad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009247362s
STEP: Saw pod success
Jun  1 01:40:06.444: INFO: Pod "client-containers-91208a6b-7d8e-4ffa-8844-85723332acad" satisfied condition "Succeeded or Failed"
Jun  1 01:40:06.448: INFO: Trying to get logs from node kind-worker2 pod client-containers-91208a6b-7d8e-4ffa-8844-85723332acad container test-container: <nil>
STEP: delete the pod
Jun  1 01:40:06.463: INFO: Waiting for pod client-containers-91208a6b-7d8e-4ffa-8844-85723332acad to disappear
Jun  1 01:40:06.466: INFO: Pod client-containers-91208a6b-7d8e-4ffa-8844-85723332acad no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Jun  1 01:40:06.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3960" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":292,"completed":65,"skipped":988,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 116 lines ...
Jun  1 01:40:39.281: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3555/pods","resourceVersion":"6226"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Jun  1 01:40:39.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3555" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":292,"completed":66,"skipped":1007,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Jun  1 01:40:43.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5823" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":67,"skipped":1025,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 110 lines ...
Jun  1 01:41:35.989: INFO: Waiting for statefulset status.replicas updated to 0
Jun  1 01:41:35.991: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Jun  1 01:41:36.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9430" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":292,"completed":68,"skipped":1095,"failed":0}
SSSSSSSSSSSSSSSSS
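
Burst scaling works because the StatefulSet opts out of ordered, one-ordinal-at-a-time pod management. A sketch of the relevant knob; the names, replica count, and image are placeholders:

// Illustrative sketch only: Parallel pod management enables burst scaling.
package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    labels := map[string]string{"app": "ss"}
    replicas := int32(3)
    ss := appsv1.StatefulSet{
        ObjectMeta: metav1.ObjectMeta{Name: "ss"},
        Spec: appsv1.StatefulSetSpec{
            ServiceName: "test",
            Replicas:    &replicas,
            // Parallel lets the controller create and delete all pods at
            // once, instead of walking ordinals in order and waiting for
            // each pod to be Running and Ready.
            PodManagementPolicy: appsv1.ParallelPodManagement,
            Selector:            &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{Name: "app", Image: "k8s.gcr.io/pause:3.2"}},
                },
            },
        },
    }
    out, _ := json.MarshalIndent(&ss, "", "  ")
    fmt.Println(string(out))
}
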
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
Jun  1 01:41:41.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4533" for this suite.
STEP: Destroying namespace "webhook-4533-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":292,"completed":69,"skipped":1112,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 8 lines ...
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 01:41:48.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5484" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":292,"completed":70,"skipped":1123,"failed":0}
SSSS
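
"Status is promptly calculated" refers to the quota controller filling status.hard and status.used shortly after the object is created, which the test polls for. A sketch of a quota object; the particular limits are arbitrary:

// Illustrative sketch only: a ResourceQuota with a few hard limits.
package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    rq := corev1.ResourceQuota{
        ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
        Spec: corev1.ResourceQuotaSpec{
            // spec.hard declares the caps; the quota controller then
            // computes status.hard and status.used for the namespace.
            Hard: corev1.ResourceList{
                corev1.ResourcePods:   resource.MustParse("5"),
                corev1.ResourceCPU:    resource.MustParse("1"),
                corev1.ResourceMemory: resource.MustParse("500Mi"),
            },
        },
    }
    out, _ := json.MarshalIndent(&rq, "", "  ")
    fmt.Println(string(out))
}
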
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-ccb221ee-c567-4fac-8b01-942b236c9214
STEP: Creating a pod to test consume configMaps
Jun  1 01:41:48.526: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7e0f5577-d528-4b29-b97a-bf903b48db50" in namespace "projected-1711" to be "Succeeded or Failed"
Jun  1 01:41:48.528: INFO: Pod "pod-projected-configmaps-7e0f5577-d528-4b29-b97a-bf903b48db50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.373578ms
Jun  1 01:41:50.533: INFO: Pod "pod-projected-configmaps-7e0f5577-d528-4b29-b97a-bf903b48db50": Phase="Running", Reason="", readiness=true. Elapsed: 2.007199115s
Jun  1 01:41:52.537: INFO: Pod "pod-projected-configmaps-7e0f5577-d528-4b29-b97a-bf903b48db50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011243917s
STEP: Saw pod success
Jun  1 01:41:52.537: INFO: Pod "pod-projected-configmaps-7e0f5577-d528-4b29-b97a-bf903b48db50" satisfied condition "Succeeded or Failed"
Jun  1 01:41:52.540: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-7e0f5577-d528-4b29-b97a-bf903b48db50 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 01:41:52.559: INFO: Waiting for pod pod-projected-configmaps-7e0f5577-d528-4b29-b97a-bf903b48db50 to disappear
Jun  1 01:41:52.561: INFO: Pod pod-projected-configmaps-7e0f5577-d528-4b29-b97a-bf903b48db50 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 01:41:52.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1711" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":292,"completed":71,"skipped":1127,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
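
"With mappings" means individual ConfigMap keys are projected under chosen file names via items, here through a projected volume source. A sketch with assumed names (the ConfigMap, key, and paths are placeholders):

// Illustrative sketch only: projecting a ConfigMap key under a new file name.
package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:         "projected-configmap-volume-test",
                Image:        "busybox:1.31",
                Command:      []string{"cat", "/etc/projected/renamed-key"},
                VolumeMounts: []corev1.VolumeMount{{Name: "config", MountPath: "/etc/projected"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "config",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
                                // The "mapping": key data-1 appears in the
                                // volume under a different file name.
                                Items: []corev1.KeyToPath{{Key: "data-1", Path: "renamed-key"}},
                            },
                        }},
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(&pod, "", "  ")
    fmt.Println(string(out))
}
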
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 01:41:52.601: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-5ce0c33f-893a-4ebc-ae09-950b54209032" in namespace "security-context-test-5859" to be "Succeeded or Failed"
Jun  1 01:41:52.605: INFO: Pod "busybox-readonly-false-5ce0c33f-893a-4ebc-ae09-950b54209032": Phase="Pending", Reason="", readiness=false. Elapsed: 3.114421ms
Jun  1 01:41:54.608: INFO: Pod "busybox-readonly-false-5ce0c33f-893a-4ebc-ae09-950b54209032": Phase="Running", Reason="", readiness=true. Elapsed: 2.006818476s
Jun  1 01:41:56.615: INFO: Pod "busybox-readonly-false-5ce0c33f-893a-4ebc-ae09-950b54209032": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014053065s
Jun  1 01:41:56.616: INFO: Pod "busybox-readonly-false-5ce0c33f-893a-4ebc-ae09-950b54209032" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Jun  1 01:41:56.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5859" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":292,"completed":72,"skipped":1148,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 11 lines ...
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 01:41:56.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7179" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":292,"completed":73,"skipped":1177,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 28 lines ...
Jun  1 01:42:05.744: INFO: Pod "test-rolling-update-deployment-df7bb669b-k7t2k" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-k7t2k test-rolling-update-deployment-df7bb669b- deployment-1405 /api/v1/namespaces/deployment-1405/pods/test-rolling-update-deployment-df7bb669b-k7t2k 6229dbf1-a3ac-4aca-b771-4719eeeea3fb 6901 0 2020-06-01 01:42:01 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-df7bb669b 73c0bf28-91e6-4345-9655-1bdfb2496e31 0xc00289d800 0xc00289d801}] []  [{kube-controller-manager Update v1 2020-06-01 01:42:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"73c0bf28-91e6-4345-9655-1bdfb2496e31\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-01 01:42:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.77\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gnv4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gnv4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gnv4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 01:42:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 01:42:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 01:42:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 01:42:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.77,StartTime:2020-06-01 01:42:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-01 01:42:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://d49864154c5597fe671aa80849bc7a5ee501f9fdd7b0bf0e25a775e895d04414,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.77,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Jun  1 01:42:05.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1405" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":292,"completed":74,"skipped":1200,"failed":0}
SSSS
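
The rolling update above brings the new ReplicaSet's pod up before the old set is scaled to 0; the pacing comes from the strategy block. A sketch with assumed surge/unavailable bounds (the suite's own values may differ); the agnhost image matches the pod dump above.

// Illustrative sketch only: an explicit RollingUpdate strategy.
package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    labels := map[string]string{"name": "sample-pod"}
    replicas := int32(1)
    maxUnavailable := intstr.FromInt(0)
    maxSurge := intstr.FromInt(1)
    dep := appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas: &replicas,
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            // Surge one new pod and allow none unavailable: the new
            // ReplicaSet comes up first, then the old one scales down,
            // matching the delete-old/create-new sequence in the log.
            Strategy: appsv1.DeploymentStrategy{
                Type: appsv1.RollingUpdateDeploymentStrategyType,
                RollingUpdate: &appsv1.RollingUpdateDeployment{
                    MaxUnavailable: &maxUnavailable,
                    MaxSurge:       &maxSurge,
                },
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{Name: "agnhost", Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13"}},
                },
            },
        },
    }
    out, _ := json.MarshalIndent(&dep, "", "  ")
    fmt.Println(string(out))
}
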
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-987e49a0-63c0-42f1-86ab-f54b1e58b1db
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 01:43:14.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1771" for this suite.
•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":75,"skipped":1204,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
Jun  1 01:43:18.662: INFO: Successfully updated pod "annotationupdateebfbeb43-e255-4bf6-a971-1fe01923c007"
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 01:43:20.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3158" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":292,"completed":76,"skipped":1233,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 01:43:20.725: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-732b756c-5c10-4cec-be2f-78b2074d61ab" in namespace "security-context-test-5183" to be "Succeeded or Failed"
Jun  1 01:43:20.728: INFO: Pod "busybox-privileged-false-732b756c-5c10-4cec-be2f-78b2074d61ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.642186ms
Jun  1 01:43:22.732: INFO: Pod "busybox-privileged-false-732b756c-5c10-4cec-be2f-78b2074d61ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006671687s
Jun  1 01:43:22.732: INFO: Pod "busybox-privileged-false-732b756c-5c10-4cec-be2f-78b2074d61ab" satisfied condition "Succeeded or Failed"
Jun  1 01:43:22.739: INFO: Got logs for pod "busybox-privileged-false-732b756c-5c10-4cec-be2f-78b2074d61ab": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Jun  1 01:43:22.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5183" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":77,"skipped":1261,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 01:43:22.745: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap that has name configmap-test-emptyKey-cb5e6e07-9191-4004-b949-d45337ae5442
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 01:43:22.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7749" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":292,"completed":78,"skipped":1273,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 27 lines ...
Jun  1 01:43:40.875: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun  1 01:43:40.879: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Jun  1 01:43:40.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-975" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":292,"completed":79,"skipped":1298,"failed":0}
SSSSSSSSS
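
The hook pod above must see its postStart HTTP GET land before the pod goes Ready. A sketch against the core/v1 types of this era (lifecycle handlers are *corev1.Handler here; later releases renamed the type); the host, path, and port are placeholders for the test's handler pod.

// Illustrative sketch only: a postStart httpGet lifecycle hook.
package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "app",
                Image:   "busybox:1.31",
                Command: []string{"sleep", "600"},
                Lifecycle: &corev1.Lifecycle{
                    // The kubelet issues this GET right after the container
                    // starts; the handler pod records the hit, which is what
                    // the test verifies before tearing everything down.
                    PostStart: &corev1.Handler{
                        HTTPGet: &corev1.HTTPGetAction{
                            Host: "10.0.0.1", // placeholder handler address
                            Path: "/echo?msg=poststart",
                            Port: intstr.FromInt(8080),
                        },
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(&pod, "", "  ")
    fmt.Println(string(out))
}
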
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
Jun  1 01:44:00.958: INFO: Restart count of pod container-probe-6614/liveness-a0096167-9fde-4d0f-9f0f-525997a5d105 is now 1 (18.034013778s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 01:44:00.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6614" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":292,"completed":80,"skipped":1307,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Jun  1 01:44:03.032: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Jun  1 01:44:03.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7292" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":292,"completed":81,"skipped":1346,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
Jun  1 01:44:10.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1728" for this suite.
STEP: Destroying namespace "webhook-1728-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":292,"completed":82,"skipped":1378,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 01:44:21.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9547" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":292,"completed":83,"skipped":1383,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-01faba33-8524-4754-a82a-bfd8c2c1eb15
STEP: Creating a pod to test consume secrets
Jun  1 01:44:21.354: INFO: Waiting up to 5m0s for pod "pod-secrets-27f894ff-a9cd-4e24-b481-fd8b1fdc6b76" in namespace "secrets-6040" to be "Succeeded or Failed"
Jun  1 01:44:21.357: INFO: Pod "pod-secrets-27f894ff-a9cd-4e24-b481-fd8b1fdc6b76": Phase="Pending", Reason="", readiness=false. Elapsed: 3.009005ms
Jun  1 01:44:23.362: INFO: Pod "pod-secrets-27f894ff-a9cd-4e24-b481-fd8b1fdc6b76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008025023s
Jun  1 01:44:25.366: INFO: Pod "pod-secrets-27f894ff-a9cd-4e24-b481-fd8b1fdc6b76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012030867s
STEP: Saw pod success
Jun  1 01:44:25.366: INFO: Pod "pod-secrets-27f894ff-a9cd-4e24-b481-fd8b1fdc6b76" satisfied condition "Succeeded or Failed"
Jun  1 01:44:25.369: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-27f894ff-a9cd-4e24-b481-fd8b1fdc6b76 container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 01:44:25.385: INFO: Waiting for pod pod-secrets-27f894ff-a9cd-4e24-b481-fd8b1fdc6b76 to disappear
Jun  1 01:44:25.388: INFO: Pod pod-secrets-27f894ff-a9cd-4e24-b481-fd8b1fdc6b76 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 01:44:25.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6040" for this suite.
STEP: Destroying namespace "secret-namespace-7044" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":292,"completed":84,"skipped":1396,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 28 lines ...
Jun  1 01:44:29.387: INFO: Selector matched 1 pods for map[app:agnhost]
Jun  1 01:44:29.387: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 01:44:29.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3621" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":292,"completed":85,"skipped":1400,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 01:44:29.395: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on tmpfs
Jun  1 01:44:29.431: INFO: Waiting up to 5m0s for pod "pod-a79f24bf-3e2f-4d91-8d7c-32ae19f185d9" in namespace "emptydir-4486" to be "Succeeded or Failed"
Jun  1 01:44:29.433: INFO: Pod "pod-a79f24bf-3e2f-4d91-8d7c-32ae19f185d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071658ms
Jun  1 01:44:31.436: INFO: Pod "pod-a79f24bf-3e2f-4d91-8d7c-32ae19f185d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005048814s
STEP: Saw pod success
Jun  1 01:44:31.436: INFO: Pod "pod-a79f24bf-3e2f-4d91-8d7c-32ae19f185d9" satisfied condition "Succeeded or Failed"
Jun  1 01:44:31.438: INFO: Trying to get logs from node kind-worker pod pod-a79f24bf-3e2f-4d91-8d7c-32ae19f185d9 container test-container: <nil>
STEP: delete the pod
Jun  1 01:44:31.452: INFO: Waiting for pod pod-a79f24bf-3e2f-4d91-8d7c-32ae19f185d9 to disappear
Jun  1 01:44:31.456: INFO: Pod pod-a79f24bf-3e2f-4d91-8d7c-32ae19f185d9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 01:44:31.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4486" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":86,"skipped":1428,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Jun  1 01:44:31.489: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 01:44:32.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4527" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":292,"completed":87,"skipped":1478,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 14 lines ...
STEP: verifying the updated pod is in kubernetes
Jun  1 01:44:37.075: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Jun  1 01:44:37.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2472" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":292,"completed":88,"skipped":1490,"failed":0}
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Jun  1 01:44:41.125: INFO: Initial restart count of pod liveness-7d288b28-039d-4a88-8d71-d8dea9309561 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 01:48:41.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9802" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":292,"completed":89,"skipped":1491,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 40 lines ...
Jun  1 01:48:51.739: INFO: Deleting pod "simpletest-rc-to-be-deleted-tn8p9" in namespace "gc-513"
Jun  1 01:48:51.751: INFO: Deleting pod "simpletest-rc-to-be-deleted-tvjvl" in namespace "gc-513"
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Jun  1 01:48:51.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-513" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":292,"completed":90,"skipped":1502,"failed":0}
SSS
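
The garbage-collector test above hinges on dependents with more than one owner reference: deleting one owner, even in foreground mode with blockOwnerDeletion set, must not delete a dependent that still has another valid owner. A sketch of such a dependent's metadata; the names, UIDs, and image are placeholders.

// Illustrative sketch only: a pod with two owner references.
package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    controller := true
    block := true
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name: "simpletest-dependent",
            // The GC removes a dependent only when no valid owner remains,
            // so foreground-deleting the first RC leaves this pod alive
            // while the second owner still exists.
            OwnerReferences: []metav1.OwnerReference{
                {
                    APIVersion:         "v1",
                    Kind:               "ReplicationController",
                    Name:               "rc-to-be-deleted",
                    UID:                "00000000-0000-0000-0000-000000000001",
                    Controller:         &controller,
                    BlockOwnerDeletion: &block,
                },
                {
                    APIVersion: "v1",
                    Kind:       "ReplicationController",
                    Name:       "rc-that-stays",
                    UID:        "00000000-0000-0000-0000-000000000002",
                },
            },
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{Name: "app", Image: "k8s.gcr.io/pause:3.2"}},
        },
    }
    out, _ := json.MarshalIndent(&pod, "", "  ")
    fmt.Println(string(out))
}
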
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 01:48:51.785: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod
Jun  1 01:48:51.871: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Jun  1 01:48:58.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4021" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":292,"completed":91,"skipped":1505,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 01:49:14.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9135" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":292,"completed":92,"skipped":1538,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 10 lines ...
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Jun  1 01:49:17.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5597" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":292,"completed":93,"skipped":1553,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Jun  1 01:49:19.370: INFO: Initial restart count of pod busybox-13517b30-c9ed-4481-9b41-c437d0ec4c60 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 01:53:19.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8294" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":292,"completed":94,"skipped":1574,"failed":0}
S
------------------------------
[sig-network] Services 
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 65 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 01:53:59.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8817" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":292,"completed":95,"skipped":1575,"failed":0}
S
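
The session-affinity test pins client traffic to a single endpoint and then waits out a configured timeout before expecting a different backend. A sketch of the Service fields involved; the ports and the 10-second timeout are illustrative, not the suite's values.

// Illustrative sketch only: ClientIP session affinity with a timeout.
package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    timeout := int32(10)
    svc := corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-timeout"},
        Spec: corev1.ServiceSpec{
            Selector: map[string]string{"app": "affinity"},
            Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(8080)}},
            // ClientIP affinity pins each client to one backend; kube-proxy
            // drops the mapping after TimeoutSeconds of inactivity, which is
            // the window the test waits out.
            SessionAffinity: corev1.ServiceAffinityClientIP,
            SessionAffinityConfig: &corev1.SessionAffinityConfig{
                ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
            },
        },
    }
    out, _ := json.MarshalIndent(&svc, "", "  ")
    fmt.Println(string(out))
}
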
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 190 lines ...
Jun  1 01:54:09.797: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun  1 01:54:09.797: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 01:54:09.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1803" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":292,"completed":96,"skipped":1576,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-c5dd6676-f0ce-417d-bede-cf2abec0f084
STEP: Creating a pod to test consume secrets
Jun  1 01:54:09.857: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-59dbbbf9-cc3f-4c34-ba27-f176e9e54874" in namespace "projected-6918" to be "Succeeded or Failed"
Jun  1 01:54:09.863: INFO: Pod "pod-projected-secrets-59dbbbf9-cc3f-4c34-ba27-f176e9e54874": Phase="Pending", Reason="", readiness=false. Elapsed: 5.270302ms
Jun  1 01:54:11.866: INFO: Pod "pod-projected-secrets-59dbbbf9-cc3f-4c34-ba27-f176e9e54874": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008254659s
Jun  1 01:54:13.870: INFO: Pod "pod-projected-secrets-59dbbbf9-cc3f-4c34-ba27-f176e9e54874": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012454773s
STEP: Saw pod success
Jun  1 01:54:13.870: INFO: Pod "pod-projected-secrets-59dbbbf9-cc3f-4c34-ba27-f176e9e54874" satisfied condition "Succeeded or Failed"
Jun  1 01:54:13.873: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-59dbbbf9-cc3f-4c34-ba27-f176e9e54874 container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 01:54:13.897: INFO: Waiting for pod pod-projected-secrets-59dbbbf9-cc3f-4c34-ba27-f176e9e54874 to disappear
Jun  1 01:54:13.900: INFO: Pod pod-projected-secrets-59dbbbf9-cc3f-4c34-ba27-f176e9e54874 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Jun  1 01:54:13.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6918" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":292,"completed":97,"skipped":1626,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 01:54:13.933: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-baf984b3-dcb7-4be2-9fcd-f44779e9178c" in namespace "security-context-test-4028" to be "Succeeded or Failed"
Jun  1 01:54:13.936: INFO: Pod "alpine-nnp-false-baf984b3-dcb7-4be2-9fcd-f44779e9178c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.430779ms
Jun  1 01:54:15.940: INFO: Pod "alpine-nnp-false-baf984b3-dcb7-4be2-9fcd-f44779e9178c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007172556s
Jun  1 01:54:17.944: INFO: Pod "alpine-nnp-false-baf984b3-dcb7-4be2-9fcd-f44779e9178c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010533061s
Jun  1 01:54:17.944: INFO: Pod "alpine-nnp-false-baf984b3-dcb7-4be2-9fcd-f44779e9178c" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Jun  1 01:54:17.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4028" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":98,"skipped":1639,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
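The Security Context test above hinges on one field: setting allowPrivilegeEscalation=false makes the kubelet start the container with the kernel's no_new_privs bit, so setuid binaries cannot gain privileges. A minimal sketch (same module assumptions; name and image are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	noEscalation := false
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "nnp-false-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "alpine-nnp-false",
				Image: "alpine:3.12",
				// With AllowPrivilegeEscalation=false, no_new_privs is set and
				// the effective UID cannot be raised inside the container.
				SecurityContext: &corev1.SecurityContext{
					AllowPrivilegeEscalation: &noEscalation,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}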
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 28 lines ...
Jun  1 01:55:24.974: INFO: Terminating ReplicationController wrapped-volume-race-3c06489d-b7df-4398-9b77-7db8d1ee23c8 pods took: 300.282307ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:175
Jun  1 01:55:39.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8651" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":292,"completed":99,"skipped":1670,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-a8ff94f9-7f8b-4974-a7b0-e1ecc90eb8f5
STEP: Creating a pod to test consume secrets
Jun  1 01:55:39.657: INFO: Waiting up to 5m0s for pod "pod-secrets-de77fac2-7339-4617-8756-d391eaa8fb71" in namespace "secrets-2676" to be "Succeeded or Failed"
Jun  1 01:55:39.660: INFO: Pod "pod-secrets-de77fac2-7339-4617-8756-d391eaa8fb71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.705706ms
Jun  1 01:55:41.664: INFO: Pod "pod-secrets-de77fac2-7339-4617-8756-d391eaa8fb71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007128908s
Jun  1 01:55:43.668: INFO: Pod "pod-secrets-de77fac2-7339-4617-8756-d391eaa8fb71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011018153s
STEP: Saw pod success
Jun  1 01:55:43.668: INFO: Pod "pod-secrets-de77fac2-7339-4617-8756-d391eaa8fb71" satisfied condition "Succeeded or Failed"
Jun  1 01:55:43.671: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-de77fac2-7339-4617-8756-d391eaa8fb71 container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 01:55:43.686: INFO: Waiting for pod pod-secrets-de77fac2-7339-4617-8756-d391eaa8fb71 to disappear
Jun  1 01:55:43.689: INFO: Pod pod-secrets-de77fac2-7339-4617-8756-d391eaa8fb71 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 01:55:43.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2676" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":100,"skipped":1722,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 01:55:50.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-3753" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":292,"completed":101,"skipped":1761,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 15 lines ...
Jun  1 01:55:56.040: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  test/e2e/framework/framework.go:597
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 01:56:08.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3749" for this suite.
STEP: Destroying namespace "webhook-3749-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":292,"completed":102,"skipped":1775,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
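The webhook timeout test above registers a deliberately slow backend and varies timeoutSeconds and failurePolicy: with Ignore, a timed-out call admits the request; with a timeout longer than the latency (or the v1 default of 10s) the call simply succeeds. A hedged sketch of such a registration, under the same module assumptions; the service name, namespace, and path are guesses modeled on the log, not confirmed fixtures:

package main

import (
	"encoding/json"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	failurePolicy := admissionregistrationv1.Ignore // admit the request if the webhook times out
	sideEffects := admissionregistrationv1.SideEffectClassNone
	timeout := int32(1) // shorter than the 5s the slow backend sleeps
	path := "/always-allow-delay-5s"

	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "slow-webhook-demo"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name:                    "slow.example.com",
			AdmissionReviewVersions: []string{"v1"},
			SideEffects:             &sideEffects,
			FailurePolicy:           &failurePolicy,
			TimeoutSeconds:          &timeout,
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-3749",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
		}},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}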
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 19 lines ...
Jun  1 01:56:19.279: INFO: stderr: ""
Jun  1 01:56:19.279: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 01:56:19.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8385" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":292,"completed":103,"skipped":1812,"failed":0}
SSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
Jun  1 01:57:13.433: INFO: Restart count of pod container-probe-3034/busybox-35e91571-deb8-4e07-97d5-61ed44b45b70 is now 1 (52.11030404s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 01:57:13.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3034" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":292,"completed":104,"skipped":1816,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 22 lines ...
Jun  1 01:57:16.786: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jun  1 01:57:16.786: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config describe pod agnhost-master-ldf6x --namespace=kubectl-3893'
Jun  1 01:57:17.004: INFO: stderr: ""
Jun  1 01:57:17.004: INFO: stdout: "Name:         agnhost-master-ldf6x\nNamespace:    kubectl-3893\nPriority:     0\nNode:         kind-worker2/172.18.0.2\nStart Time:   Mon, 01 Jun 2020 01:57:14 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.244.1.113\nIPs:\n  IP:           10.244.1.113\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://bd865fbabbd49972d2718ec35d52da390c34132efd712e32f7ed08d93b28493b\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 01 Jun 2020 01:57:15 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-njsst (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-njsst:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-njsst\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From                   Message\n  ----    ------     ----  ----                   -------\n  Normal  Scheduled  2s    default-scheduler      Successfully assigned kubectl-3893/agnhost-master-ldf6x to kind-worker2\n  Normal  Pulled     1s    kubelet, kind-worker2  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n  Normal  Created    1s    kubelet, kind-worker2  Created container agnhost-master\n  Normal  Started    1s    kubelet, kind-worker2  Started container agnhost-master\n"
Jun  1 01:57:17.004: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config describe rc agnhost-master --namespace=kubectl-3893'
Jun  1 01:57:17.246: INFO: stderr: ""
Jun  1 01:57:17.247: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-3893\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  3s    replication-controller  Created pod: agnhost-master-ldf6x\n"
Jun  1 01:57:17.247: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config describe service agnhost-master --namespace=kubectl-3893'
Jun  1 01:57:17.441: INFO: stderr: ""
Jun  1 01:57:17.441: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-3893\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.109.216.87\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.1.113:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jun  1 01:57:17.445: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config describe node kind-control-plane'
Jun  1 01:57:17.701: INFO: stderr: ""
Jun  1 01:57:17.701: INFO: stdout: "Name:               kind-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kind-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Mon, 01 Jun 2020 01:25:19 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kind-control-plane\n  AcquireTime:     <unset>\n  RenewTime:       Mon, 01 Jun 2020 01:57:09 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Mon, 01 Jun 2020 01:55:59 +0000   Mon, 01 Jun 2020 01:25:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Mon, 01 Jun 2020 01:55:59 +0000   Mon, 01 Jun 2020 01:25:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Mon, 01 Jun 2020 01:55:59 +0000   Mon, 01 Jun 2020 01:25:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Mon, 01 Jun 2020 01:55:59 +0000   Mon, 01 Jun 2020 01:25:58 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.3\n  Hostname:    kind-control-plane\nCapacity:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             53582972Ki\n  pods:               110\nAllocatable:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             53582972Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 3552929634274ff6a2d32f980b417c01\n  System UUID:                7101f2cd-ce10-4543-be5a-6cf74a63f028\n  Boot ID:                    84d23691-f34f-4e7d-b4e8-5c7676e0df3a\n  Kernel Version:             4.15.0-1044-gke\n  OS Image:                   Ubuntu 20.04 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.4-12-g1e902b2d\n  Kubelet Version:            v1.19.0-beta.0.316+413bc1a1d238ef\n  Kube-Proxy Version:         v1.19.0-beta.0.316+413bc1a1d238ef\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (6 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 etcd-kind-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         31m\n  kube-system                 kindnet-7vksf                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31m\n  kube-system                 kube-apiserver-kind-control-plane             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31m\n  kube-system                 kube-controller-manager-kind-control-plane    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31m\n  kube-system                 kube-proxy-8tdlw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         31m\n  kube-system                 kube-scheduler-kind-control-plane             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests   Limits\n  --------           --------   ------\n  cpu                650m (8%)  100m (1%)\n  memory             50Mi (0%)  50Mi (0%)\n  ephemeral-storage  0 (0%)     0 (0%)\n  hugepages-1Gi      0 (0%)     0 (0%)\n  hugepages-2Mi      0 (0%)     0 (0%)\nEvents:\n  Type     Reason                    Age                From                            Message\n  ----     ------                    ----               ----                            -------\n  Normal   NodeHasSufficientMemory   32m (x5 over 32m)  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure     32m (x5 over 32m)  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID      32m (x4 over 32m)  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientPID\n  Normal   Starting                  31m                kubelet, kind-control-plane     Starting kubelet.\n  Warning  CheckLimitsForResolvConf  31m                kubelet, kind-control-plane     Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\n  Normal   NodeHasSufficientMemory   31m                kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure     31m                kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID      31m                kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientPID\n  Normal   NodeAllocatableEnforced   31m                kubelet, kind-control-plane     Updated Node Allocatable limit across pods\n  Normal   Starting                  31m                kube-proxy, kind-control-plane  Starting kube-proxy.\n  Normal   NodeReady                 31m                kubelet, kind-control-plane     Node kind-control-plane status is now: NodeReady\n"
Jun  1 01:57:17.701: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config describe namespace kubectl-3893'
Jun  1 01:57:17.889: INFO: stderr: ""
Jun  1 01:57:17.889: INFO: stdout: "Name:         kubectl-3893\nLabels:       e2e-framework=kubectl\n              e2e-run=1e630076-054d-4653-ab34-c4ab3075c580\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 01:57:17.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3893" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":292,"completed":105,"skipped":1822,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jun  1 01:57:20.445: INFO: Successfully updated pod "pod-update-activedeadlineseconds-45d8a81d-b950-4b6a-8bca-7fd0b6c0bb8c"
Jun  1 01:57:20.445: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-45d8a81d-b950-4b6a-8bca-7fd0b6c0bb8c" in namespace "pods-3189" to be "terminated due to deadline exceeded"
Jun  1 01:57:20.448: INFO: Pod "pod-update-activedeadlineseconds-45d8a81d-b950-4b6a-8bca-7fd0b6c0bb8c": Phase="Running", Reason="", readiness=true. Elapsed: 2.69318ms
Jun  1 01:57:22.458: INFO: Pod "pod-update-activedeadlineseconds-45d8a81d-b950-4b6a-8bca-7fd0b6c0bb8c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.012401119s
Jun  1 01:57:22.458: INFO: Pod "pod-update-activedeadlineseconds-45d8a81d-b950-4b6a-8bca-7fd0b6c0bb8c" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Jun  1 01:57:22.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3189" for this suite.
•{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":292,"completed":106,"skipped":1829,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
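The Pods test above works by patching spec.activeDeadlineSeconds down on a running pod, after which the kubelet terminates it and the pod reaches Phase=Failed with Reason=DeadlineExceeded, as the log shows. A sketch of a pod carrying that field (same module assumptions; name, image, and deadline are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	deadline := int64(5) // seconds of allowed runtime, measured from pod start
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "active-deadline-demo"},
		Spec: corev1.PodSpec{
			// Once this many seconds have elapsed, the kubelet kills the pod
			// and marks it Failed/DeadlineExceeded.
			ActiveDeadlineSeconds: &deadline,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}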
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 16 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 01:57:35.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9987" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":292,"completed":107,"skipped":1850,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  test/e2e/framework/framework.go:175
Jun  1 01:57:41.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8801" for this suite.
STEP: Destroying namespace "webhook-8801-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":292,"completed":108,"skipped":1854,"failed":0}
SSSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] LimitRange
... skipping 31 lines ...
Jun  1 01:57:48.673: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  test/e2e/framework/framework.go:175
Jun  1 01:57:48.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-4606" for this suite.
•{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":292,"completed":109,"skipped":1860,"failed":0}
SSSSSSSS
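For the LimitRange test above: the LimitRanger admission plugin fills in default/defaultRequest values on containers that omit them, and rejects pods that exceed max. A hedged sketch of such an object (same module assumptions; the quantities are illustrative, not the test's exact values):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	lr := &corev1.LimitRange{
		ObjectMeta: metav1.ObjectMeta{Name: "limits-demo"},
		Spec: corev1.LimitRangeSpec{
			Limits: []corev1.LimitRangeItem{{
				Type: corev1.LimitTypeContainer,
				Min:  corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("50m")},
				Max:  corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1")},
				// Containers created without explicit limits/requests in this
				// namespace have these values applied at admission time.
				Default: corev1.ResourceList{
					corev1.ResourceCPU:    resource.MustParse("500m"),
					corev1.ResourceMemory: resource.MustParse("500Mi"),
				},
				DefaultRequest: corev1.ResourceList{
					corev1.ResourceCPU:    resource.MustParse("100m"),
					corev1.ResourceMemory: resource.MustParse("200Mi"),
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(lr, "", "  ")
	fmt.Println(string(out))
}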
------------------------------
[sig-api-machinery] Events 
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Events
... skipping 11 lines ...
STEP: deleting the test event
STEP: listing all events in all namespaces
[AfterEach] [sig-api-machinery] Events
  test/e2e/framework/framework.go:175
Jun  1 01:57:48.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9822" for this suite.
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":292,"completed":110,"skipped":1868,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...
Jun  1 01:57:50.935: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 01:57:51.078: INFO: Deleting pod dns-5121...
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 01:57:51.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5121" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":292,"completed":111,"skipped":1879,"failed":0}
SSSSSSSSSSSSSS
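The DNS test above uses dnsPolicy=None, which tells the kubelet to ignore the node and cluster resolvers and build the pod's /etc/resolv.conf purely from dnsConfig. A minimal sketch (same module assumptions; the nameserver and search domain are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-config-demo"},
		Spec: corev1.PodSpec{
			// DNSNone: resolv.conf comes only from DNSConfig below.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
			Containers: []corev1.Container{{
				Name:    "util",
				Image:   "busybox:1.29",
				Command: []string{"sleep", "600"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}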
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Jun  1 01:57:51.115: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 01:57:51.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8334" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":292,"completed":112,"skipped":1893,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 42 lines ...
Jun  1 01:59:21.938: INFO: Waiting for statefulset status.replicas updated to 0
Jun  1 01:59:21.941: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Jun  1 01:59:21.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6809" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":292,"completed":113,"skipped":1905,"failed":0}
S
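The StatefulSet canary test above drives updates through the RollingUpdate partition: pods with an ordinal greater than or equal to the partition get the new template, lower ordinals keep the old one, and lowering the partition step by step phases the rollout. A sketch of a partitioned StatefulSet (same module assumptions; the names mirror the log's "ss2" but the image and counts are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(3)
	partition := int32(2) // ordinals >= 2 receive the new template; 0 and 1 keep the old one
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test",
			Selector:    &metav1.LabelSelector{MatchLabels: map[string]string{"app": "ss2"}},
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
					Partition: &partition,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "ss2"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "webserver", Image: "httpd:2.4.38-alpine"}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}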
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Jun  1 01:59:21.987: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 01:59:28.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9231" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":292,"completed":114,"skipped":1906,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 10 lines ...
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Jun  1 01:59:28.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6718" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":292,"completed":115,"skipped":1943,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating secret secrets-8461/secret-test-ce1ed3d1-622c-4d28-98fe-61878f4ea62b
STEP: Creating a pod to test consume secrets
Jun  1 01:59:28.272: INFO: Waiting up to 5m0s for pod "pod-configmaps-4e473d93-6a5f-4c46-a974-1f28377aa9b7" in namespace "secrets-8461" to be "Succeeded or Failed"
Jun  1 01:59:28.275: INFO: Pod "pod-configmaps-4e473d93-6a5f-4c46-a974-1f28377aa9b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.884337ms
Jun  1 01:59:30.280: INFO: Pod "pod-configmaps-4e473d93-6a5f-4c46-a974-1f28377aa9b7": Phase="Running", Reason="", readiness=true. Elapsed: 2.007715829s
Jun  1 01:59:32.284: INFO: Pod "pod-configmaps-4e473d93-6a5f-4c46-a974-1f28377aa9b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011495461s
STEP: Saw pod success
Jun  1 01:59:32.284: INFO: Pod "pod-configmaps-4e473d93-6a5f-4c46-a974-1f28377aa9b7" satisfied condition "Succeeded or Failed"
Jun  1 01:59:32.288: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-4e473d93-6a5f-4c46-a974-1f28377aa9b7 container env-test: <nil>
STEP: delete the pod
Jun  1 01:59:32.311: INFO: Waiting for pod pod-configmaps-4e473d93-6a5f-4c46-a974-1f28377aa9b7 to disappear
Jun  1 01:59:32.314: INFO: Pod pod-configmaps-4e473d93-6a5f-4c46-a974-1f28377aa9b7 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Jun  1 01:59:32.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8461" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":292,"completed":116,"skipped":1969,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-67f5df28-2e38-42ff-9e40-e7a3ecfa429d
STEP: Creating a pod to test consume configMaps
Jun  1 01:59:32.352: INFO: Waiting up to 5m0s for pod "pod-configmaps-02c2d806-d697-425b-8811-a79a238ea66a" in namespace "configmap-6715" to be "Succeeded or Failed"
Jun  1 01:59:32.355: INFO: Pod "pod-configmaps-02c2d806-d697-425b-8811-a79a238ea66a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.842207ms
Jun  1 01:59:34.358: INFO: Pod "pod-configmaps-02c2d806-d697-425b-8811-a79a238ea66a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006238647s
Jun  1 01:59:36.362: INFO: Pod "pod-configmaps-02c2d806-d697-425b-8811-a79a238ea66a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009878151s
STEP: Saw pod success
Jun  1 01:59:36.362: INFO: Pod "pod-configmaps-02c2d806-d697-425b-8811-a79a238ea66a" satisfied condition "Succeeded or Failed"
Jun  1 01:59:36.365: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-02c2d806-d697-425b-8811-a79a238ea66a container configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 01:59:36.379: INFO: Waiting for pod pod-configmaps-02c2d806-d697-425b-8811-a79a238ea66a to disappear
Jun  1 01:59:36.381: INFO: Pod pod-configmaps-02c2d806-d697-425b-8811-a79a238ea66a no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 01:59:36.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6715" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":117,"skipped":2015,"failed":0}
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 28 lines ...
Jun  1 01:59:58.603: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 01:59:58.720: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Jun  1 01:59:58.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2754" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":292,"completed":118,"skipped":2018,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 01:59:58.727: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod with failed condition
STEP: updating the pod
Jun  1 02:01:59.277: INFO: Successfully updated pod "var-expansion-ebc31f03-92de-4af5-b98c-80badf684f83"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Jun  1 02:02:01.284: INFO: Deleting pod "var-expansion-ebc31f03-92de-4af5-b98c-80badf684f83" in namespace "var-expansion-6001"
Jun  1 02:02:01.289: INFO: Wait up to 5m0s for pod "var-expansion-ebc31f03-92de-4af5-b98c-80badf684f83" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 02:02:39.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6001" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":292,"completed":119,"skipped":2027,"failed":0}

------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 28 lines ...
  test/e2e/framework/framework.go:175
Jun  1 02:02:54.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6813" for this suite.
STEP: Destroying namespace "webhook-6813-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":292,"completed":120,"skipped":2027,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Jun  1 02:02:54.795: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jun  1 02:02:54.833: INFO: Waiting up to 5m0s for pod "downward-api-92c0fbf9-44a8-4b9b-91dd-2d64a0d3fd71" in namespace "downward-api-1134" to be "Succeeded or Failed"
Jun  1 02:02:54.839: INFO: Pod "downward-api-92c0fbf9-44a8-4b9b-91dd-2d64a0d3fd71": Phase="Pending", Reason="", readiness=false. Elapsed: 5.807912ms
Jun  1 02:02:56.843: INFO: Pod "downward-api-92c0fbf9-44a8-4b9b-91dd-2d64a0d3fd71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010064236s
Jun  1 02:02:58.848: INFO: Pod "downward-api-92c0fbf9-44a8-4b9b-91dd-2d64a0d3fd71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014817617s
STEP: Saw pod success
Jun  1 02:02:58.848: INFO: Pod "downward-api-92c0fbf9-44a8-4b9b-91dd-2d64a0d3fd71" satisfied condition "Succeeded or Failed"
Jun  1 02:02:58.850: INFO: Trying to get logs from node kind-worker pod downward-api-92c0fbf9-44a8-4b9b-91dd-2d64a0d3fd71 container dapi-container: <nil>
STEP: delete the pod
Jun  1 02:02:58.875: INFO: Waiting for pod downward-api-92c0fbf9-44a8-4b9b-91dd-2d64a0d3fd71 to disappear
Jun  1 02:02:58.877: INFO: Pod downward-api-92c0fbf9-44a8-4b9b-91dd-2d64a0d3fd71 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Jun  1 02:02:58.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1134" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":292,"completed":121,"skipped":2044,"failed":0}
SSS
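The Downward API test above injects the node's IP into the container environment via a fieldRef on status.hostIP. A minimal sketch (same module assumptions; name, image, and command are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					// Resolved by the kubelet at container start to the IP of
					// the node the pod was scheduled onto.
					Name: "HOST_IP",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{
							APIVersion: "v1",
							FieldPath:  "status.hostIP",
						},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}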
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Jun  1 02:02:58.884: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jun  1 02:02:58.913: INFO: Waiting up to 5m0s for pod "downward-api-8ade2f27-59fa-4c75-b0c9-b349724fe1be" in namespace "downward-api-9625" to be "Succeeded or Failed"
Jun  1 02:02:58.916: INFO: Pod "downward-api-8ade2f27-59fa-4c75-b0c9-b349724fe1be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.586287ms
Jun  1 02:03:00.920: INFO: Pod "downward-api-8ade2f27-59fa-4c75-b0c9-b349724fe1be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007014894s
Jun  1 02:03:02.926: INFO: Pod "downward-api-8ade2f27-59fa-4c75-b0c9-b349724fe1be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012321107s
STEP: Saw pod success
Jun  1 02:03:02.926: INFO: Pod "downward-api-8ade2f27-59fa-4c75-b0c9-b349724fe1be" satisfied condition "Succeeded or Failed"
Jun  1 02:03:02.930: INFO: Trying to get logs from node kind-worker2 pod downward-api-8ade2f27-59fa-4c75-b0c9-b349724fe1be container dapi-container: <nil>
STEP: delete the pod
Jun  1 02:03:02.957: INFO: Waiting for pod downward-api-8ade2f27-59fa-4c75-b0c9-b349724fe1be to disappear
Jun  1 02:03:02.960: INFO: Pod downward-api-8ade2f27-59fa-4c75-b0c9-b349724fe1be no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Jun  1 02:03:02.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9625" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":292,"completed":122,"skipped":2047,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 50 lines ...
Jun  1 02:03:15.370: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8841/pods","resourceVersion":"13948"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Jun  1 02:03:15.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8841" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":292,"completed":123,"skipped":2071,"failed":0}
SSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Events
... skipping 16 lines ...
Jun  1 02:03:21.447: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  test/e2e/framework/framework.go:175
Jun  1 02:03:21.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4802" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":292,"completed":124,"skipped":2076,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  test/e2e/framework/framework.go:175
Jun  1 02:03:25.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6473" for this suite.
STEP: Destroying namespace "webhook-6473-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":292,"completed":125,"skipped":2077,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 13 lines ...
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 02:03:42.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2377" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":292,"completed":126,"skipped":2092,"failed":0}
SSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 12 lines ...
Jun  1 02:03:47.640: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Jun  1 02:03:48.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9929" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":292,"completed":127,"skipped":2095,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Jun  1 02:03:48.889: INFO: stderr: ""
Jun  1 02:03:48.889: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-beta.0.316+413bc1a1d238ef\", GitCommit:\"413bc1a1d238efb7c4ba9e3aac2c381c93295aec\", GitTreeState:\"clean\", BuildDate:\"2020-05-31T23:17:53Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-beta.0.316+413bc1a1d238ef\", GitCommit:\"413bc1a1d238efb7c4ba9e3aac2c381c93295aec\", GitTreeState:\"clean\", BuildDate:\"2020-05-31T23:17:53Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 02:03:48.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3963" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":292,"completed":128,"skipped":2109,"failed":0}

------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 42 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Jun  1 02:03:57.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5466" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":292,"completed":129,"skipped":2109,"failed":0}
SS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 71 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 02:04:19.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6577" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":292,"completed":130,"skipped":2111,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 02:04:19.303: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jun  1 02:04:19.345: INFO: Waiting up to 5m0s for pod "pod-ad066c96-42d0-4d58-bda3-bfb901ff87d2" in namespace "emptydir-1092" to be "Succeeded or Failed"
Jun  1 02:04:19.348: INFO: Pod "pod-ad066c96-42d0-4d58-bda3-bfb901ff87d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.71266ms
Jun  1 02:04:21.355: INFO: Pod "pod-ad066c96-42d0-4d58-bda3-bfb901ff87d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009841848s
Jun  1 02:04:23.359: INFO: Pod "pod-ad066c96-42d0-4d58-bda3-bfb901ff87d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013583349s
STEP: Saw pod success
Jun  1 02:04:23.359: INFO: Pod "pod-ad066c96-42d0-4d58-bda3-bfb901ff87d2" satisfied condition "Succeeded or Failed"
Jun  1 02:04:23.361: INFO: Trying to get logs from node kind-worker2 pod pod-ad066c96-42d0-4d58-bda3-bfb901ff87d2 container test-container: <nil>
STEP: delete the pod
Jun  1 02:04:23.377: INFO: Waiting for pod pod-ad066c96-42d0-4d58-bda3-bfb901ff87d2 to disappear
Jun  1 02:04:23.379: INFO: Pod pod-ad066c96-42d0-4d58-bda3-bfb901ff87d2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 02:04:23.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1092" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":131,"skipped":2118,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 02:04:23.420: INFO: Waiting up to 5m0s for pod "downwardapi-volume-13d2e0f1-fa72-4653-9cba-a23a41908daa" in namespace "projected-8771" to be "Succeeded or Failed"
Jun  1 02:04:23.423: INFO: Pod "downwardapi-volume-13d2e0f1-fa72-4653-9cba-a23a41908daa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.435302ms
Jun  1 02:04:25.429: INFO: Pod "downwardapi-volume-13d2e0f1-fa72-4653-9cba-a23a41908daa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00842436s
Jun  1 02:04:27.435: INFO: Pod "downwardapi-volume-13d2e0f1-fa72-4653-9cba-a23a41908daa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014248482s
STEP: Saw pod success
Jun  1 02:04:27.435: INFO: Pod "downwardapi-volume-13d2e0f1-fa72-4653-9cba-a23a41908daa" satisfied condition "Succeeded or Failed"
Jun  1 02:04:27.439: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-13d2e0f1-fa72-4653-9cba-a23a41908daa container client-container: <nil>
STEP: delete the pod
Jun  1 02:04:27.450: INFO: Waiting for pod downwardapi-volume-13d2e0f1-fa72-4653-9cba-a23a41908daa to disappear
Jun  1 02:04:27.453: INFO: Pod downwardapi-volume-13d2e0f1-fa72-4653-9cba-a23a41908daa no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 02:04:27.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8771" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":292,"completed":132,"skipped":2125,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 36 lines ...
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-512, will wait for the garbage collector to delete the pods
Jun  1 02:04:36.636: INFO: Deleting DaemonSet.extensions daemon-set took: 5.651254ms
Jun  1 02:04:36.936: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.283023ms
Jun  1 02:24:36.936: INFO: ERROR: Pod "daemon-set-qd5w7" still exists. Node: "kind-worker2"
Jun  1 02:24:36.937: FAIL: Unexpected error:
    <*errors.errorString | 0xc003708670>: {
        s: "error while waiting for pods gone daemon-set: there are 1 pods left. E.g. \"daemon-set-qd5w7\" on node \"kind-worker2\"",
    }
    error while waiting for pods gone daemon-set: there are 1 pods left. E.g. "daemon-set-qd5w7" on node "kind-worker2"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func3.1()
	test/e2e/apps/daemon_set.go:107 +0x429
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a03000)
... skipping 22 lines ...
Jun  1 02:24:36.943: INFO: At 2020-06-01 02:04:31 +0000 UTC - event for daemon-set-2p8tl: {kubelet kind-worker2} Killing: Stopping container app
Jun  1 02:24:36.943: INFO: At 2020-06-01 02:04:33 +0000 UTC - event for daemon-set: {daemonset-controller } SuccessfulCreate: Created pod: daemon-set-nzt5x
Jun  1 02:24:36.943: INFO: At 2020-06-01 02:04:33 +0000 UTC - event for daemon-set-nzt5x: {default-scheduler } Scheduled: Successfully assigned daemonsets-512/daemon-set-nzt5x to kind-worker2
Jun  1 02:24:36.943: INFO: At 2020-06-01 02:04:34 +0000 UTC - event for daemon-set: {daemonset-controller } SuccessfulDelete: Deleted pod: daemon-set-nzt5x
Jun  1 02:24:36.943: INFO: At 2020-06-01 02:04:34 +0000 UTC - event for daemon-set-nzt5x: {kubelet kind-worker2} Pulling: Pulling image "foo:non-existent"
Jun  1 02:24:36.943: INFO: At 2020-06-01 02:04:35 +0000 UTC - event for daemon-set: {daemonset-controller } SuccessfulCreate: Created pod: daemon-set-qd5w7
Jun  1 02:24:36.943: INFO: At 2020-06-01 02:04:35 +0000 UTC - event for daemon-set-nzt5x: {kubelet kind-worker2} Failed: Failed to pull image "foo:non-existent": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/foo:non-existent": failed to resolve reference "docker.io/library/foo:non-existent": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
Jun  1 02:24:36.943: INFO: At 2020-06-01 02:04:35 +0000 UTC - event for daemon-set-nzt5x: {kubelet kind-worker2} Failed: Error: ErrImagePull
Jun  1 02:24:36.943: INFO: At 2020-06-01 02:04:35 +0000 UTC - event for daemon-set-qd5w7: {default-scheduler } Scheduled: Successfully assigned daemonsets-512/daemon-set-qd5w7 to kind-worker2
Jun  1 02:24:36.943: INFO: At 2020-06-01 02:04:36 +0000 UTC - event for daemon-set-bwmws: {kubelet kind-worker} Killing: Stopping container app
Jun  1 02:24:36.943: INFO: At 2020-06-01 02:04:36 +0000 UTC - event for daemon-set-qd5w7: {kubelet kind-worker2} Created: Created container app
Jun  1 02:24:36.943: INFO: At 2020-06-01 02:04:36 +0000 UTC - event for daemon-set-qd5w7: {kubelet kind-worker2} Pulled: Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
Jun  1 02:24:36.943: INFO: At 2020-06-01 02:04:37 +0000 UTC - event for daemon-set-qd5w7: {kubelet kind-worker2} Started: Started container app
Jun  1 02:24:36.947: INFO: POD               NODE          PHASE    GRACE  CONDITIONS
... skipping 63 lines ...
• Failure in Spec Teardown (AfterEach) [1209.858 seconds]
[sig-apps] Daemon set [Serial]
test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance] [AfterEach]
  test/e2e/framework/framework.go:597

  Jun  1 02:24:36.937: Unexpected error:
      <*errors.errorString | 0xc003708670>: {
          s: "error while waiting for pods gone daemon-set: there are 1 pods left. E.g. \"daemon-set-qd5w7\" on node \"kind-worker2\"",
      }
      error while waiting for pods gone daemon-set: there are 1 pods left. E.g. "daemon-set-qd5w7" on node "kind-worker2"
  occurred

  test/e2e/apps/daemon_set.go:107
------------------------------
{"msg":"FAILED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":292,"completed":132,"skipped":2156,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
... skipping 418 lines ...
Jun  1 02:24:49.077: INFO: 99 %ile: 769.716387ms
Jun  1 02:24:49.077: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  test/e2e/framework/framework.go:175
Jun  1 02:24:49.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-6976" for this suite.
•{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":292,"completed":133,"skipped":2170,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 02:24:49.136: INFO: Waiting up to 5m0s for pod "downwardapi-volume-16e23919-5e46-4026-9cbc-291a52e3eabf" in namespace "projected-4323" to be "Succeeded or Failed"
Jun  1 02:24:49.141: INFO: Pod "downwardapi-volume-16e23919-5e46-4026-9cbc-291a52e3eabf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.748753ms
Jun  1 02:24:51.145: INFO: Pod "downwardapi-volume-16e23919-5e46-4026-9cbc-291a52e3eabf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008956604s
Jun  1 02:24:53.149: INFO: Pod "downwardapi-volume-16e23919-5e46-4026-9cbc-291a52e3eabf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01348522s
STEP: Saw pod success
Jun  1 02:24:53.150: INFO: Pod "downwardapi-volume-16e23919-5e46-4026-9cbc-291a52e3eabf" satisfied condition "Succeeded or Failed"
Jun  1 02:24:53.153: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-16e23919-5e46-4026-9cbc-291a52e3eabf container client-container: <nil>
STEP: delete the pod
Jun  1 02:24:53.168: INFO: Waiting for pod downwardapi-volume-16e23919-5e46-4026-9cbc-291a52e3eabf to disappear
Jun  1 02:24:53.171: INFO: Pod downwardapi-volume-16e23919-5e46-4026-9cbc-291a52e3eabf no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 02:24:53.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4323" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":292,"completed":134,"skipped":2249,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 28 lines ...
Jun  1 02:25:09.283: INFO: stderr: ""
Jun  1 02:25:09.283: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 02:25:09.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9710" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":292,"completed":135,"skipped":2268,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
Jun  1 02:25:13.463: INFO: Unable to read jessie_udp@dns-test-service.dns-4381 from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:13.467: INFO: Unable to read jessie_tcp@dns-test-service.dns-4381 from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:13.469: INFO: Unable to read jessie_udp@dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:13.473: INFO: Unable to read jessie_tcp@dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:13.476: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:13.478: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:13.496: INFO: Lookups using dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4381 wheezy_tcp@dns-test-service.dns-4381 wheezy_udp@dns-test-service.dns-4381.svc wheezy_tcp@dns-test-service.dns-4381.svc wheezy_udp@_http._tcp.dns-test-service.dns-4381.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4381.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4381 jessie_tcp@dns-test-service.dns-4381 jessie_udp@dns-test-service.dns-4381.svc jessie_tcp@dns-test-service.dns-4381.svc jessie_udp@_http._tcp.dns-test-service.dns-4381.svc jessie_tcp@_http._tcp.dns-test-service.dns-4381.svc]

Jun  1 02:25:18.500: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:18.504: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:18.507: INFO: Unable to read wheezy_udp@dns-test-service.dns-4381 from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:18.511: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4381 from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:18.514: INFO: Unable to read wheezy_udp@dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
... skipping 5 lines ...
Jun  1 02:25:18.548: INFO: Unable to read jessie_udp@dns-test-service.dns-4381 from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:18.551: INFO: Unable to read jessie_tcp@dns-test-service.dns-4381 from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:18.553: INFO: Unable to read jessie_udp@dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:18.556: INFO: Unable to read jessie_tcp@dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:18.558: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:18.561: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:18.577: INFO: Lookups using dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4381 wheezy_tcp@dns-test-service.dns-4381 wheezy_udp@dns-test-service.dns-4381.svc wheezy_tcp@dns-test-service.dns-4381.svc wheezy_udp@_http._tcp.dns-test-service.dns-4381.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4381.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4381 jessie_tcp@dns-test-service.dns-4381 jessie_udp@dns-test-service.dns-4381.svc jessie_tcp@dns-test-service.dns-4381.svc jessie_udp@_http._tcp.dns-test-service.dns-4381.svc jessie_tcp@_http._tcp.dns-test-service.dns-4381.svc]

Jun  1 02:25:23.500: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:23.503: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:23.506: INFO: Unable to read wheezy_udp@dns-test-service.dns-4381 from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:23.509: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4381 from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:23.512: INFO: Unable to read wheezy_udp@dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
... skipping 5 lines ...
Jun  1 02:25:23.545: INFO: Unable to read jessie_udp@dns-test-service.dns-4381 from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:23.547: INFO: Unable to read jessie_tcp@dns-test-service.dns-4381 from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:23.550: INFO: Unable to read jessie_udp@dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:23.553: INFO: Unable to read jessie_tcp@dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:23.556: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:23.558: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:23.575: INFO: Lookups using dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4381 wheezy_tcp@dns-test-service.dns-4381 wheezy_udp@dns-test-service.dns-4381.svc wheezy_tcp@dns-test-service.dns-4381.svc wheezy_udp@_http._tcp.dns-test-service.dns-4381.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4381.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4381 jessie_tcp@dns-test-service.dns-4381 jessie_udp@dns-test-service.dns-4381.svc jessie_tcp@dns-test-service.dns-4381.svc jessie_udp@_http._tcp.dns-test-service.dns-4381.svc jessie_tcp@_http._tcp.dns-test-service.dns-4381.svc]

Jun  1 02:25:28.500: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:28.504: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:28.509: INFO: Unable to read wheezy_udp@dns-test-service.dns-4381 from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:28.512: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4381 from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:28.515: INFO: Unable to read wheezy_udp@dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
... skipping 5 lines ...
Jun  1 02:25:28.549: INFO: Unable to read jessie_udp@dns-test-service.dns-4381 from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:28.552: INFO: Unable to read jessie_tcp@dns-test-service.dns-4381 from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:28.554: INFO: Unable to read jessie_udp@dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:28.557: INFO: Unable to read jessie_tcp@dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:28.560: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:28.563: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:28.578: INFO: Lookups using dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4381 wheezy_tcp@dns-test-service.dns-4381 wheezy_udp@dns-test-service.dns-4381.svc wheezy_tcp@dns-test-service.dns-4381.svc wheezy_udp@_http._tcp.dns-test-service.dns-4381.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4381.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4381 jessie_tcp@dns-test-service.dns-4381 jessie_udp@dns-test-service.dns-4381.svc jessie_tcp@dns-test-service.dns-4381.svc jessie_udp@_http._tcp.dns-test-service.dns-4381.svc jessie_tcp@_http._tcp.dns-test-service.dns-4381.svc]

Jun  1 02:25:33.499: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:33.503: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:33.506: INFO: Unable to read wheezy_udp@dns-test-service.dns-4381 from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:33.509: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4381 from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:33.512: INFO: Unable to read wheezy_udp@dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
... skipping 5 lines ...
Jun  1 02:25:33.545: INFO: Unable to read jessie_udp@dns-test-service.dns-4381 from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:33.548: INFO: Unable to read jessie_tcp@dns-test-service.dns-4381 from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:33.551: INFO: Unable to read jessie_udp@dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:33.554: INFO: Unable to read jessie_tcp@dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:33.556: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:33.560: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:33.577: INFO: Lookups using dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4381 wheezy_tcp@dns-test-service.dns-4381 wheezy_udp@dns-test-service.dns-4381.svc wheezy_tcp@dns-test-service.dns-4381.svc wheezy_udp@_http._tcp.dns-test-service.dns-4381.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4381.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4381 jessie_tcp@dns-test-service.dns-4381 jessie_udp@dns-test-service.dns-4381.svc jessie_tcp@dns-test-service.dns-4381.svc jessie_udp@_http._tcp.dns-test-service.dns-4381.svc jessie_tcp@_http._tcp.dns-test-service.dns-4381.svc]

Jun  1 02:25:38.500: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:38.503: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:38.507: INFO: Unable to read wheezy_udp@dns-test-service.dns-4381 from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:38.510: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4381 from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:38.512: INFO: Unable to read wheezy_udp@dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
... skipping 5 lines ...
Jun  1 02:25:38.551: INFO: Unable to read jessie_udp@dns-test-service.dns-4381 from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:38.553: INFO: Unable to read jessie_tcp@dns-test-service.dns-4381 from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:38.557: INFO: Unable to read jessie_udp@dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:38.560: INFO: Unable to read jessie_tcp@dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:38.563: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:38.565: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4381.svc from pod dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201: the server could not find the requested resource (get pods dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201)
Jun  1 02:25:38.583: INFO: Lookups using dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4381 wheezy_tcp@dns-test-service.dns-4381 wheezy_udp@dns-test-service.dns-4381.svc wheezy_tcp@dns-test-service.dns-4381.svc wheezy_udp@_http._tcp.dns-test-service.dns-4381.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4381.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4381 jessie_tcp@dns-test-service.dns-4381 jessie_udp@dns-test-service.dns-4381.svc jessie_tcp@dns-test-service.dns-4381.svc jessie_udp@_http._tcp.dns-test-service.dns-4381.svc jessie_tcp@_http._tcp.dns-test-service.dns-4381.svc]

Jun  1 02:25:43.584: INFO: DNS probes using dns-4381/dns-test-ee8dfc36-90ef-42d1-a37f-c540ba2cc201 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 02:25:43.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4381" for this suite.
•{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":292,"completed":136,"skipped":2270,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 12 lines ...
STEP: reading a file in the container
Jun  1 02:25:48.991: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl exec --namespace=svcaccounts-3431 pod-service-account-f2ef2413-771a-4677-92ef-0a4ea7e153fb -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
Jun  1 02:25:49.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3431" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":292,"completed":137,"skipped":2299,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-21e157cc-751b-4e1f-9ac6-64004359d0ea
STEP: Creating a pod to test consume secrets
Jun  1 02:25:49.390: INFO: Waiting up to 5m0s for pod "pod-secrets-c8f09f61-1e83-4d6b-aa45-90b78d7341d5" in namespace "secrets-6702" to be "Succeeded or Failed"
Jun  1 02:25:49.392: INFO: Pod "pod-secrets-c8f09f61-1e83-4d6b-aa45-90b78d7341d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.580525ms
Jun  1 02:25:51.397: INFO: Pod "pod-secrets-c8f09f61-1e83-4d6b-aa45-90b78d7341d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007062208s
Jun  1 02:25:53.401: INFO: Pod "pod-secrets-c8f09f61-1e83-4d6b-aa45-90b78d7341d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010779105s
STEP: Saw pod success
Jun  1 02:25:53.401: INFO: Pod "pod-secrets-c8f09f61-1e83-4d6b-aa45-90b78d7341d5" satisfied condition "Succeeded or Failed"
Jun  1 02:25:53.404: INFO: Trying to get logs from node kind-worker pod pod-secrets-c8f09f61-1e83-4d6b-aa45-90b78d7341d5 container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 02:25:53.420: INFO: Waiting for pod pod-secrets-c8f09f61-1e83-4d6b-aa45-90b78d7341d5 to disappear
Jun  1 02:25:53.422: INFO: Pod pod-secrets-c8f09f61-1e83-4d6b-aa45-90b78d7341d5 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 02:25:53.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6702" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":292,"completed":138,"skipped":2301,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 8 lines ...
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:175
Jun  1 02:25:57.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3913" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":292,"completed":139,"skipped":2306,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
... skipping 12 lines ...
Jun  1 02:26:01.905: INFO: Terminating Job.batch foo pods took: 300.304124ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Jun  1 02:26:33.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7235" for this suite.
•{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":292,"completed":140,"skipped":2312,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 12 lines ...
Jun  1 02:26:37.967: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 02:26:38.101: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 02:26:38.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7416" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":292,"completed":141,"skipped":2320,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 19 lines ...
Jun  1 02:26:40.958: INFO: Deleting pod "var-expansion-b7760f9b-6b28-4de4-9c9b-6cf712bf4240" in namespace "var-expansion-636"
Jun  1 02:26:40.961: INFO: Wait up to 5m0s for pod "var-expansion-b7760f9b-6b28-4de4-9c9b-6cf712bf4240" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 02:27:20.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-636" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":292,"completed":142,"skipped":2338,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 58 lines ...
Jun  1 02:27:39.524: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9637/pods","resourceVersion":"20714"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Jun  1 02:27:39.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9637" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":292,"completed":143,"skipped":2341,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 02:27:39.567: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c659ed67-8765-41fa-9edf-dce92fef800e" in namespace "downward-api-6420" to be "Succeeded or Failed"
Jun  1 02:27:39.572: INFO: Pod "downwardapi-volume-c659ed67-8765-41fa-9edf-dce92fef800e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.216579ms
Jun  1 02:27:41.577: INFO: Pod "downwardapi-volume-c659ed67-8765-41fa-9edf-dce92fef800e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010287916s
Jun  1 02:27:43.581: INFO: Pod "downwardapi-volume-c659ed67-8765-41fa-9edf-dce92fef800e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013930853s
STEP: Saw pod success
Jun  1 02:27:43.581: INFO: Pod "downwardapi-volume-c659ed67-8765-41fa-9edf-dce92fef800e" satisfied condition "Succeeded or Failed"
Jun  1 02:27:43.583: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-c659ed67-8765-41fa-9edf-dce92fef800e container client-container: <nil>
STEP: delete the pod
Jun  1 02:27:43.609: INFO: Waiting for pod downwardapi-volume-c659ed67-8765-41fa-9edf-dce92fef800e to disappear
Jun  1 02:27:43.611: INFO: Pod downwardapi-volume-c659ed67-8765-41fa-9edf-dce92fef800e no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 02:27:43.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6420" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":292,"completed":144,"skipped":2344,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-1bcd65ce-f1da-4618-8cab-c2aa227d8801
STEP: Creating a pod to test consume secrets
Jun  1 02:27:43.651: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f62dafaf-dd50-4c63-b1a6-1172b2f02bdb" in namespace "projected-1806" to be "Succeeded or Failed"
Jun  1 02:27:43.653: INFO: Pod "pod-projected-secrets-f62dafaf-dd50-4c63-b1a6-1172b2f02bdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237771ms
Jun  1 02:27:45.656: INFO: Pod "pod-projected-secrets-f62dafaf-dd50-4c63-b1a6-1172b2f02bdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005540385s
Jun  1 02:27:47.661: INFO: Pod "pod-projected-secrets-f62dafaf-dd50-4c63-b1a6-1172b2f02bdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01032435s
STEP: Saw pod success
Jun  1 02:27:47.661: INFO: Pod "pod-projected-secrets-f62dafaf-dd50-4c63-b1a6-1172b2f02bdb" satisfied condition "Succeeded or Failed"
Jun  1 02:27:47.665: INFO: Trying to get logs from node kind-worker pod pod-projected-secrets-f62dafaf-dd50-4c63-b1a6-1172b2f02bdb container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun  1 02:27:47.689: INFO: Waiting for pod pod-projected-secrets-f62dafaf-dd50-4c63-b1a6-1172b2f02bdb to disappear
Jun  1 02:27:47.693: INFO: Pod pod-projected-secrets-f62dafaf-dd50-4c63-b1a6-1172b2f02bdb no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Jun  1 02:27:47.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1806" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":145,"skipped":2388,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 14 lines ...
Jun  1 02:27:52.742: INFO: Trying to dial the pod
Jun  1 02:27:57.754: INFO: Controller my-hostname-basic-ad2e1e94-fea7-4d9c-8159-1a7606753726: Got expected result from replica 1 [my-hostname-basic-ad2e1e94-fea7-4d9c-8159-1a7606753726-r4tf4]: "my-hostname-basic-ad2e1e94-fea7-4d9c-8159-1a7606753726-r4tf4", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Jun  1 02:27:57.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9490" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":292,"completed":146,"skipped":2426,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Jun  1 02:27:59.808: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Jun  1 02:27:59.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3434" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":292,"completed":147,"skipped":2452,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
S
------------------------------
[sig-network] Services 
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 70 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 02:28:59.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2126" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":292,"completed":148,"skipped":2453,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 02:28:59.565: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun  1 02:28:59.611: INFO: Waiting up to 5m0s for pod "pod-5640fc8f-edca-4017-b311-68a9a5aede8a" in namespace "emptydir-5107" to be "Succeeded or Failed"
Jun  1 02:28:59.614: INFO: Pod "pod-5640fc8f-edca-4017-b311-68a9a5aede8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317567ms
Jun  1 02:29:01.617: INFO: Pod "pod-5640fc8f-edca-4017-b311-68a9a5aede8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005789394s
STEP: Saw pod success
Jun  1 02:29:01.617: INFO: Pod "pod-5640fc8f-edca-4017-b311-68a9a5aede8a" satisfied condition "Succeeded or Failed"
Jun  1 02:29:01.620: INFO: Trying to get logs from node kind-worker pod pod-5640fc8f-edca-4017-b311-68a9a5aede8a container test-container: <nil>
STEP: delete the pod
Jun  1 02:29:01.637: INFO: Waiting for pod pod-5640fc8f-edca-4017-b311-68a9a5aede8a to disappear
Jun  1 02:29:01.640: INFO: Pod pod-5640fc8f-edca-4017-b311-68a9a5aede8a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 02:29:01.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5107" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":149,"skipped":2459,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 52 lines ...
Jun  1 02:29:11.975: INFO: stderr: ""
Jun  1 02:29:11.975: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 02:29:11.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6902" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":292,"completed":150,"skipped":2479,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Jun  1 02:29:11.984: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override all
Jun  1 02:29:12.022: INFO: Waiting up to 5m0s for pod "client-containers-1f7ba017-4104-41df-868c-9af760e440cf" in namespace "containers-7076" to be "Succeeded or Failed"
Jun  1 02:29:12.025: INFO: Pod "client-containers-1f7ba017-4104-41df-868c-9af760e440cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089962ms
Jun  1 02:29:14.028: INFO: Pod "client-containers-1f7ba017-4104-41df-868c-9af760e440cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005655201s
Jun  1 02:29:16.032: INFO: Pod "client-containers-1f7ba017-4104-41df-868c-9af760e440cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009356241s
STEP: Saw pod success
Jun  1 02:29:16.032: INFO: Pod "client-containers-1f7ba017-4104-41df-868c-9af760e440cf" satisfied condition "Succeeded or Failed"
Jun  1 02:29:16.034: INFO: Trying to get logs from node kind-worker pod client-containers-1f7ba017-4104-41df-868c-9af760e440cf container test-container: <nil>
STEP: delete the pod
Jun  1 02:29:16.051: INFO: Waiting for pod client-containers-1f7ba017-4104-41df-868c-9af760e440cf to disappear
Jun  1 02:29:16.053: INFO: Pod client-containers-1f7ba017-4104-41df-868c-9af760e440cf no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Jun  1 02:29:16.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7076" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":292,"completed":151,"skipped":2496,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 9 lines ...
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/framework.go:175
Jun  1 02:29:16.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-255" for this suite.
STEP: Destroying namespace "nspatchtest-dcc8174b-e0f9-4968-b282-6a4ab0c51ac4-2388" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":292,"completed":152,"skipped":2500,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-4sc7
STEP: Creating a pod to test atomic-volume-subpath
Jun  1 02:29:16.148: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4sc7" in namespace "subpath-1762" to be "Succeeded or Failed"
Jun  1 02:29:16.153: INFO: Pod "pod-subpath-test-configmap-4sc7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.564626ms
Jun  1 02:29:18.156: INFO: Pod "pod-subpath-test-configmap-4sc7": Phase="Running", Reason="", readiness=true. Elapsed: 2.007860063s
Jun  1 02:29:20.160: INFO: Pod "pod-subpath-test-configmap-4sc7": Phase="Running", Reason="", readiness=true. Elapsed: 4.011884665s
Jun  1 02:29:22.164: INFO: Pod "pod-subpath-test-configmap-4sc7": Phase="Running", Reason="", readiness=true. Elapsed: 6.016096667s
Jun  1 02:29:24.169: INFO: Pod "pod-subpath-test-configmap-4sc7": Phase="Running", Reason="", readiness=true. Elapsed: 8.020575359s
Jun  1 02:29:26.173: INFO: Pod "pod-subpath-test-configmap-4sc7": Phase="Running", Reason="", readiness=true. Elapsed: 10.024241567s
Jun  1 02:29:28.177: INFO: Pod "pod-subpath-test-configmap-4sc7": Phase="Running", Reason="", readiness=true. Elapsed: 12.028311015s
Jun  1 02:29:30.180: INFO: Pod "pod-subpath-test-configmap-4sc7": Phase="Running", Reason="", readiness=true. Elapsed: 14.031818549s
Jun  1 02:29:32.184: INFO: Pod "pod-subpath-test-configmap-4sc7": Phase="Running", Reason="", readiness=true. Elapsed: 16.035215267s
Jun  1 02:29:34.188: INFO: Pod "pod-subpath-test-configmap-4sc7": Phase="Running", Reason="", readiness=true. Elapsed: 18.039217508s
Jun  1 02:29:36.191: INFO: Pod "pod-subpath-test-configmap-4sc7": Phase="Running", Reason="", readiness=true. Elapsed: 20.042751134s
Jun  1 02:29:38.195: INFO: Pod "pod-subpath-test-configmap-4sc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.047094069s
STEP: Saw pod success
Jun  1 02:29:38.196: INFO: Pod "pod-subpath-test-configmap-4sc7" satisfied condition "Succeeded or Failed"
Jun  1 02:29:38.207: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-configmap-4sc7 container test-container-subpath-configmap-4sc7: <nil>
STEP: delete the pod
Jun  1 02:29:38.230: INFO: Waiting for pod pod-subpath-test-configmap-4sc7 to disappear
Jun  1 02:29:38.232: INFO: Pod pod-subpath-test-configmap-4sc7 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-4sc7
Jun  1 02:29:38.232: INFO: Deleting pod "pod-subpath-test-configmap-4sc7" in namespace "subpath-1762"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Jun  1 02:29:38.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1762" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":292,"completed":153,"skipped":2505,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
Jun  1 02:29:43.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8" for this suite.
STEP: Destroying namespace "webhook-8-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":292,"completed":154,"skipped":2506,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 3 lines ...
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  test/e2e/common/pods.go:179
[It] should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 02:29:48.058: INFO: Waiting up to 5m0s for pod "client-envvars-3e5e5770-e9f1-4ec3-9901-c04ef6d7fb4e" in namespace "pods-7001" to be "Succeeded or Failed"
Jun  1 02:29:48.064: INFO: Pod "client-envvars-3e5e5770-e9f1-4ec3-9901-c04ef6d7fb4e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.237049ms
Jun  1 02:29:50.068: INFO: Pod "client-envvars-3e5e5770-e9f1-4ec3-9901-c04ef6d7fb4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010264529s
STEP: Saw pod success
Jun  1 02:29:50.068: INFO: Pod "client-envvars-3e5e5770-e9f1-4ec3-9901-c04ef6d7fb4e" satisfied condition "Succeeded or Failed"
Jun  1 02:29:50.071: INFO: Trying to get logs from node kind-worker2 pod client-envvars-3e5e5770-e9f1-4ec3-9901-c04ef6d7fb4e container env3cont: <nil>
STEP: delete the pod
Jun  1 02:29:50.085: INFO: Waiting for pod client-envvars-3e5e5770-e9f1-4ec3-9901-c04ef6d7fb4e to disappear
Jun  1 02:29:50.088: INFO: Pod client-envvars-3e5e5770-e9f1-4ec3-9901-c04ef6d7fb4e no longer exists
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Jun  1 02:29:50.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7001" for this suite.
•{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":292,"completed":155,"skipped":2530,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 02:29:50.127: INFO: Waiting up to 5m0s for pod "busybox-user-65534-94a428f3-f8af-4fbf-ae4e-e0b4c5f58b9d" in namespace "security-context-test-1380" to be "Succeeded or Failed"
Jun  1 02:29:50.129: INFO: Pod "busybox-user-65534-94a428f3-f8af-4fbf-ae4e-e0b4c5f58b9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085927ms
Jun  1 02:29:52.132: INFO: Pod "busybox-user-65534-94a428f3-f8af-4fbf-ae4e-e0b4c5f58b9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00551765s
Jun  1 02:29:52.132: INFO: Pod "busybox-user-65534-94a428f3-f8af-4fbf-ae4e-e0b4c5f58b9d" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Jun  1 02:29:52.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1380" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":156,"skipped":2548,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 19 lines ...
Jun  1 02:30:12.176: INFO: The status of Pod test-webserver-8257110f-4079-499a-919d-8b5df96a7a97 is Running (Ready = true)
Jun  1 02:30:12.180: INFO: Container started at 2020-06-01 02:29:53 +0000 UTC, pod became ready at 2020-06-01 02:30:11 +0000 UTC
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 02:30:12.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-140" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":292,"completed":157,"skipped":2554,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 34 lines ...
Jun  1 02:30:17.543: INFO: stdout: "service/rm3 exposed\n"
Jun  1 02:30:17.547: INFO: Service rm3 in namespace kubectl-713 found.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 02:30:19.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-713" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":292,"completed":158,"skipped":2563,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 7 lines ...
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 02:31:19.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3460" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":292,"completed":159,"skipped":2566,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 02:31:19.601: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun  1 02:31:19.631: INFO: Waiting up to 5m0s for pod "pod-da4e3177-9792-40e7-85ed-dd19a752e524" in namespace "emptydir-8842" to be "Succeeded or Failed"
Jun  1 02:31:19.633: INFO: Pod "pod-da4e3177-9792-40e7-85ed-dd19a752e524": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180491ms
Jun  1 02:31:21.639: INFO: Pod "pod-da4e3177-9792-40e7-85ed-dd19a752e524": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00791806s
STEP: Saw pod success
Jun  1 02:31:21.639: INFO: Pod "pod-da4e3177-9792-40e7-85ed-dd19a752e524" satisfied condition "Succeeded or Failed"
Jun  1 02:31:21.642: INFO: Trying to get logs from node kind-worker2 pod pod-da4e3177-9792-40e7-85ed-dd19a752e524 container test-container: <nil>
STEP: delete the pod
Jun  1 02:31:21.672: INFO: Waiting for pod pod-da4e3177-9792-40e7-85ed-dd19a752e524 to disappear
Jun  1 02:31:21.675: INFO: Pod pod-da4e3177-9792-40e7-85ed-dd19a752e524 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 02:31:21.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8842" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":160,"skipped":2569,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-2b8cbcc2-8fcb-4f1e-9517-7882ac0ae93a
STEP: Creating a pod to test consume secrets
Jun  1 02:31:21.727: INFO: Waiting up to 5m0s for pod "pod-secrets-85d93417-2a33-400f-9591-23f4a139ccf8" in namespace "secrets-3269" to be "Succeeded or Failed"
Jun  1 02:31:21.734: INFO: Pod "pod-secrets-85d93417-2a33-400f-9591-23f4a139ccf8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.343352ms
Jun  1 02:31:23.738: INFO: Pod "pod-secrets-85d93417-2a33-400f-9591-23f4a139ccf8": Phase="Running", Reason="", readiness=true. Elapsed: 2.010318715s
Jun  1 02:31:25.742: INFO: Pod "pod-secrets-85d93417-2a33-400f-9591-23f4a139ccf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014215853s
STEP: Saw pod success
Jun  1 02:31:25.742: INFO: Pod "pod-secrets-85d93417-2a33-400f-9591-23f4a139ccf8" satisfied condition "Succeeded or Failed"
Jun  1 02:31:25.744: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-85d93417-2a33-400f-9591-23f4a139ccf8 container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 02:31:25.757: INFO: Waiting for pod pod-secrets-85d93417-2a33-400f-9591-23f4a139ccf8 to disappear
Jun  1 02:31:25.759: INFO: Pod pod-secrets-85d93417-2a33-400f-9591-23f4a139ccf8 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 02:31:25.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3269" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":161,"skipped":2574,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 35 lines ...
Jun  1 02:33:38.917: INFO: Deleting pod "var-expansion-c721eb05-cdf7-4d85-880c-eab60b74c4d1" in namespace "var-expansion-3204"
Jun  1 02:33:38.923: INFO: Wait up to 5m0s for pod "var-expansion-c721eb05-cdf7-4d85-880c-eab60b74c4d1" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 02:34:20.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3204" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":292,"completed":162,"skipped":2608,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-dc69420b-2ec9-4471-9dd6-23552ae54c3e
STEP: Creating a pod to test consume secrets
Jun  1 02:34:20.971: INFO: Waiting up to 5m0s for pod "pod-secrets-df00cef0-0254-4c76-8ced-cb536091cd7f" in namespace "secrets-9600" to be "Succeeded or Failed"
Jun  1 02:34:20.973: INFO: Pod "pod-secrets-df00cef0-0254-4c76-8ced-cb536091cd7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025059ms
Jun  1 02:34:22.978: INFO: Pod "pod-secrets-df00cef0-0254-4c76-8ced-cb536091cd7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006056875s
STEP: Saw pod success
Jun  1 02:34:22.978: INFO: Pod "pod-secrets-df00cef0-0254-4c76-8ced-cb536091cd7f" satisfied condition "Succeeded or Failed"
Jun  1 02:34:22.982: INFO: Trying to get logs from node kind-worker pod pod-secrets-df00cef0-0254-4c76-8ced-cb536091cd7f container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 02:34:23.025: INFO: Waiting for pod pod-secrets-df00cef0-0254-4c76-8ced-cb536091cd7f to disappear
Jun  1 02:34:23.029: INFO: Pod pod-secrets-df00cef0-0254-4c76-8ced-cb536091cd7f no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 02:34:23.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9600" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":163,"skipped":2611,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 12 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-544
STEP: Creating statefulset with conflicting port in namespace statefulset-544
STEP: Waiting until pod test-pod will start running in namespace statefulset-544
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-544
Jun  1 02:34:27.099: INFO: Observed stateful pod in namespace: statefulset-544, name: ss-0, uid: f1b74429-a529-4c75-a64d-437b4c19015e, status phase: Pending. Waiting for statefulset controller to delete.
Jun  1 02:34:27.291: INFO: Observed stateful pod in namespace: statefulset-544, name: ss-0, uid: f1b74429-a529-4c75-a64d-437b4c19015e, status phase: Failed. Waiting for statefulset controller to delete.
Jun  1 02:34:27.298: INFO: Observed stateful pod in namespace: statefulset-544, name: ss-0, uid: f1b74429-a529-4c75-a64d-437b4c19015e, status phase: Failed. Waiting for statefulset controller to delete.
Jun  1 02:34:27.303: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-544
STEP: Removing pod with conflicting port in namespace statefulset-544
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-544 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:114
Jun  1 02:34:31.324: INFO: Deleting all statefulset in ns statefulset-544
Jun  1 02:34:31.328: INFO: Scaling statefulset ss to 0
Jun  1 02:34:41.340: INFO: Waiting for statefulset status.replicas updated to 0
Jun  1 02:34:41.342: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Jun  1 02:34:41.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-544" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":292,"completed":164,"skipped":2616,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSS
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] 
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
... skipping 19 lines ...
Jun  1 02:36:19.499: INFO: Noticed Pod "taint-eviction-b2" gets evicted.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
  test/e2e/framework/framework.go:175
Jun  1 02:36:19.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-multiple-pods-7686" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":292,"completed":165,"skipped":2619,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 02:36:19.523: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
Jun  1 02:36:19.572: INFO: Waiting up to 5m0s for pod "pod-639fb3f2-5e5e-4c5c-baf0-0b7f26165d5b" in namespace "emptydir-2905" to be "Succeeded or Failed"
Jun  1 02:36:19.575: INFO: Pod "pod-639fb3f2-5e5e-4c5c-baf0-0b7f26165d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.007512ms
Jun  1 02:36:21.579: INFO: Pod "pod-639fb3f2-5e5e-4c5c-baf0-0b7f26165d5b": Phase="Running", Reason="", readiness=true. Elapsed: 2.007106675s
Jun  1 02:36:23.584: INFO: Pod "pod-639fb3f2-5e5e-4c5c-baf0-0b7f26165d5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012042587s
STEP: Saw pod success
Jun  1 02:36:23.584: INFO: Pod "pod-639fb3f2-5e5e-4c5c-baf0-0b7f26165d5b" satisfied condition "Succeeded or Failed"
Jun  1 02:36:23.588: INFO: Trying to get logs from node kind-worker pod pod-639fb3f2-5e5e-4c5c-baf0-0b7f26165d5b container test-container: <nil>
STEP: delete the pod
Jun  1 02:36:23.622: INFO: Waiting for pod pod-639fb3f2-5e5e-4c5c-baf0-0b7f26165d5b to disappear
Jun  1 02:36:23.625: INFO: Pod pod-639fb3f2-5e5e-4c5c-baf0-0b7f26165d5b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 02:36:23.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2905" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":166,"skipped":2626,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 11 lines ...
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 02:36:41.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3624" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":292,"completed":167,"skipped":2683,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 02:36:41.719: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun  1 02:36:41.747: INFO: Waiting up to 5m0s for pod "pod-15776182-6f6d-4598-afad-740ba49be41b" in namespace "emptydir-2557" to be "Succeeded or Failed"
Jun  1 02:36:41.749: INFO: Pod "pod-15776182-6f6d-4598-afad-740ba49be41b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.784026ms
Jun  1 02:36:43.754: INFO: Pod "pod-15776182-6f6d-4598-afad-740ba49be41b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006117943s
Jun  1 02:36:45.758: INFO: Pod "pod-15776182-6f6d-4598-afad-740ba49be41b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010322669s
STEP: Saw pod success
Jun  1 02:36:45.758: INFO: Pod "pod-15776182-6f6d-4598-afad-740ba49be41b" satisfied condition "Succeeded or Failed"
Jun  1 02:36:45.762: INFO: Trying to get logs from node kind-worker pod pod-15776182-6f6d-4598-afad-740ba49be41b container test-container: <nil>
STEP: delete the pod
Jun  1 02:36:45.777: INFO: Waiting for pod pod-15776182-6f6d-4598-afad-740ba49be41b to disappear
Jun  1 02:36:45.779: INFO: Pod pod-15776182-6f6d-4598-afad-740ba49be41b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 02:36:45.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2557" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":168,"skipped":2691,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should have an terminated reason [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Jun  1 02:36:49.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2133" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":292,"completed":169,"skipped":2695,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}

------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 5 lines ...
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Jun  1 02:36:53.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8392" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":292,"completed":170,"skipped":2695,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 25 lines ...
Jun  1 02:36:58.952: INFO: Pod "test-cleanup-deployment-6688745694-9jnj5" is not available:
&Pod{ObjectMeta:{test-cleanup-deployment-6688745694-9jnj5 test-cleanup-deployment-6688745694- deployment-7998 /api/v1/namespaces/deployment-7998/pods/test-cleanup-deployment-6688745694-9jnj5 57d0cbf5-7cda-47f6-9c1c-91f2ce081c37 23529 0 2020-06-01 02:36:58 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 81e49487-69cc-4b4c-a7a6-d841030e34fe 0xc00147dc37 0xc00147dc38}] []  [{kube-controller-manager Update v1 2020-06-01 02:36:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81e49487-69cc-4b4c-a7a6-d841030e34fe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2gvrb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2gvrb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2gvrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,
TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 02:36:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Jun  1 02:36:58.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7998" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":292,"completed":171,"skipped":2706,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 02:36:58.982: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 02:38:59.023: INFO: Deleting pod "var-expansion-62f4fdb7-a271-45b8-aeb6-1d21cbe5f3f5" in namespace "var-expansion-2636"
Jun  1 02:38:59.029: INFO: Wait up to 5m0s for pod "var-expansion-62f4fdb7-a271-45b8-aeb6-1d21cbe5f3f5" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 02:39:11.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2636" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":292,"completed":172,"skipped":2758,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a volume subpath [sig-storage] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Jun  1 02:39:11.045: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in volume subpath
Jun  1 02:39:11.072: INFO: Waiting up to 5m0s for pod "var-expansion-409bf861-3e19-40a5-bc30-6bf6cb1302af" in namespace "var-expansion-7219" to be "Succeeded or Failed"
Jun  1 02:39:11.075: INFO: Pod "var-expansion-409bf861-3e19-40a5-bc30-6bf6cb1302af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.844767ms
Jun  1 02:39:13.079: INFO: Pod "var-expansion-409bf861-3e19-40a5-bc30-6bf6cb1302af": Phase="Running", Reason="", readiness=true. Elapsed: 2.006708098s
Jun  1 02:39:15.083: INFO: Pod "var-expansion-409bf861-3e19-40a5-bc30-6bf6cb1302af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010509153s
STEP: Saw pod success
Jun  1 02:39:15.083: INFO: Pod "var-expansion-409bf861-3e19-40a5-bc30-6bf6cb1302af" satisfied condition "Succeeded or Failed"
Jun  1 02:39:15.086: INFO: Trying to get logs from node kind-worker pod var-expansion-409bf861-3e19-40a5-bc30-6bf6cb1302af container dapi-container: <nil>
STEP: delete the pod
Jun  1 02:39:15.109: INFO: Waiting for pod var-expansion-409bf861-3e19-40a5-bc30-6bf6cb1302af to disappear
Jun  1 02:39:15.112: INFO: Pod var-expansion-409bf861-3e19-40a5-bc30-6bf6cb1302af no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 02:39:15.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7219" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":292,"completed":173,"skipped":2778,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Jun  1 02:39:15.118: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jun  1 02:39:15.145: INFO: Waiting up to 5m0s for pod "downward-api-3aa5b4c7-d71b-4f4e-bd13-f8996edfc045" in namespace "downward-api-7241" to be "Succeeded or Failed"
Jun  1 02:39:15.148: INFO: Pod "downward-api-3aa5b4c7-d71b-4f4e-bd13-f8996edfc045": Phase="Pending", Reason="", readiness=false. Elapsed: 2.637386ms
Jun  1 02:39:17.152: INFO: Pod "downward-api-3aa5b4c7-d71b-4f4e-bd13-f8996edfc045": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007112754s
Jun  1 02:39:19.157: INFO: Pod "downward-api-3aa5b4c7-d71b-4f4e-bd13-f8996edfc045": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011618454s
STEP: Saw pod success
Jun  1 02:39:19.157: INFO: Pod "downward-api-3aa5b4c7-d71b-4f4e-bd13-f8996edfc045" satisfied condition "Succeeded or Failed"
Jun  1 02:39:19.159: INFO: Trying to get logs from node kind-worker pod downward-api-3aa5b4c7-d71b-4f4e-bd13-f8996edfc045 container dapi-container: <nil>
STEP: delete the pod
Jun  1 02:39:19.181: INFO: Waiting for pod downward-api-3aa5b4c7-d71b-4f4e-bd13-f8996edfc045 to disappear
Jun  1 02:39:19.185: INFO: Pod downward-api-3aa5b4c7-d71b-4f4e-bd13-f8996edfc045 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Jun  1 02:39:19.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7241" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":292,"completed":174,"skipped":2787,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Aggregator
... skipping 19 lines ...
[AfterEach] [sig-api-machinery] Aggregator
  test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  test/e2e/framework/framework.go:175
Jun  1 02:39:35.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-343" for this suite.
•{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":292,"completed":175,"skipped":2791,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 02:39:39.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8251" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":292,"completed":176,"skipped":2810,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 29 lines ...
Jun  1 02:40:06.617: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 02:40:07.740: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Jun  1 02:40:07.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-753" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":177,"skipped":2816,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 02:40:11.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3832" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":292,"completed":178,"skipped":2821,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 02:40:22.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8205" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":292,"completed":179,"skipped":2822,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 02:40:22.931: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun  1 02:40:22.986: INFO: Waiting up to 5m0s for pod "pod-47f31926-179c-4194-a060-929d1c6fe965" in namespace "emptydir-6702" to be "Succeeded or Failed"
Jun  1 02:40:22.994: INFO: Pod "pod-47f31926-179c-4194-a060-929d1c6fe965": Phase="Pending", Reason="", readiness=false. Elapsed: 8.188502ms
Jun  1 02:40:24.998: INFO: Pod "pod-47f31926-179c-4194-a060-929d1c6fe965": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012473016s
STEP: Saw pod success
Jun  1 02:40:24.998: INFO: Pod "pod-47f31926-179c-4194-a060-929d1c6fe965" satisfied condition "Succeeded or Failed"
Jun  1 02:40:25.001: INFO: Trying to get logs from node kind-worker pod pod-47f31926-179c-4194-a060-929d1c6fe965 container test-container: <nil>
STEP: delete the pod
Jun  1 02:40:25.014: INFO: Waiting for pod pod-47f31926-179c-4194-a060-929d1c6fe965 to disappear
Jun  1 02:40:25.016: INFO: Pod pod-47f31926-179c-4194-a060-929d1c6fe965 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 02:40:25.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6702" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":180,"skipped":2830,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 15 lines ...
  test/e2e/framework/framework.go:175
Jun  1 02:40:31.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4807" for this suite.
STEP: Destroying namespace "nsdeletetest-2257" for this suite.
Jun  1 02:40:31.156: INFO: Namespace nsdeletetest-2257 was already deleted
STEP: Destroying namespace "nsdeletetest-8658" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":292,"completed":181,"skipped":2841,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 02:40:31.159: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun  1 02:40:31.189: INFO: Waiting up to 5m0s for pod "pod-035b41e5-99a0-4b43-942f-35e4aa1ed24a" in namespace "emptydir-1586" to be "Succeeded or Failed"
Jun  1 02:40:31.192: INFO: Pod "pod-035b41e5-99a0-4b43-942f-35e4aa1ed24a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.577334ms
Jun  1 02:40:33.196: INFO: Pod "pod-035b41e5-99a0-4b43-942f-35e4aa1ed24a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006696707s
STEP: Saw pod success
Jun  1 02:40:33.196: INFO: Pod "pod-035b41e5-99a0-4b43-942f-35e4aa1ed24a" satisfied condition "Succeeded or Failed"
Jun  1 02:40:33.199: INFO: Trying to get logs from node kind-worker pod pod-035b41e5-99a0-4b43-942f-35e4aa1ed24a container test-container: <nil>
STEP: delete the pod
Jun  1 02:40:33.213: INFO: Waiting for pod pod-035b41e5-99a0-4b43-942f-35e4aa1ed24a to disappear
Jun  1 02:40:33.216: INFO: Pod pod-035b41e5-99a0-4b43-942f-35e4aa1ed24a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 02:40:33.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1586" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":182,"skipped":2859,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-16bf07c5-7e0a-418c-a2b3-142df26f4a7f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Jun  1 02:41:59.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4741" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":183,"skipped":2874,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 10 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 02:41:59.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7905" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":292,"completed":184,"skipped":2882,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 13 lines ...
Jun  1 02:42:01.725: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Jun  1 02:42:02.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2352" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":292,"completed":185,"skipped":2887,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 02:42:02.742: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun  1 02:42:02.770: INFO: Waiting up to 5m0s for pod "pod-82622acb-af92-42b0-8e83-5d6244513cba" in namespace "emptydir-8483" to be "Succeeded or Failed"
Jun  1 02:42:02.772: INFO: Pod "pod-82622acb-af92-42b0-8e83-5d6244513cba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.406352ms
Jun  1 02:42:04.781: INFO: Pod "pod-82622acb-af92-42b0-8e83-5d6244513cba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011470613s
Jun  1 02:42:06.785: INFO: Pod "pod-82622acb-af92-42b0-8e83-5d6244513cba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015138878s
STEP: Saw pod success
Jun  1 02:42:06.785: INFO: Pod "pod-82622acb-af92-42b0-8e83-5d6244513cba" satisfied condition "Succeeded or Failed"
Jun  1 02:42:06.789: INFO: Trying to get logs from node kind-worker pod pod-82622acb-af92-42b0-8e83-5d6244513cba container test-container: <nil>
STEP: delete the pod
Jun  1 02:42:06.809: INFO: Waiting for pod pod-82622acb-af92-42b0-8e83-5d6244513cba to disappear
Jun  1 02:42:06.812: INFO: Pod pod-82622acb-af92-42b0-8e83-5d6244513cba no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 02:42:06.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8483" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":186,"skipped":2922,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-95e7a955-a467-4c23-83ee-2539f2fa2047
STEP: Creating a pod to test consume secrets
Jun  1 02:42:06.849: INFO: Waiting up to 5m0s for pod "pod-secrets-6d7d497f-ee1b-40fa-a9db-a9ac991c15b5" in namespace "secrets-4406" to be "Succeeded or Failed"
Jun  1 02:42:06.852: INFO: Pod "pod-secrets-6d7d497f-ee1b-40fa-a9db-a9ac991c15b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.377983ms
Jun  1 02:42:08.856: INFO: Pod "pod-secrets-6d7d497f-ee1b-40fa-a9db-a9ac991c15b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006743538s
Jun  1 02:42:10.860: INFO: Pod "pod-secrets-6d7d497f-ee1b-40fa-a9db-a9ac991c15b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010383466s
STEP: Saw pod success
Jun  1 02:42:10.860: INFO: Pod "pod-secrets-6d7d497f-ee1b-40fa-a9db-a9ac991c15b5" satisfied condition "Succeeded or Failed"
Jun  1 02:42:10.862: INFO: Trying to get logs from node kind-worker pod pod-secrets-6d7d497f-ee1b-40fa-a9db-a9ac991c15b5 container secret-env-test: <nil>
STEP: delete the pod
Jun  1 02:42:10.876: INFO: Waiting for pod pod-secrets-6d7d497f-ee1b-40fa-a9db-a9ac991c15b5 to disappear
Jun  1 02:42:10.878: INFO: Pod pod-secrets-6d7d497f-ee1b-40fa-a9db-a9ac991c15b5 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Jun  1 02:42:10.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4406" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":292,"completed":187,"skipped":2958,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}

------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 02:42:10.913: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f603025e-e96f-423d-9a5d-4bbb49b4efb3" in namespace "downward-api-9668" to be "Succeeded or Failed"
Jun  1 02:42:10.916: INFO: Pod "downwardapi-volume-f603025e-e96f-423d-9a5d-4bbb49b4efb3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.21536ms
Jun  1 02:42:12.921: INFO: Pod "downwardapi-volume-f603025e-e96f-423d-9a5d-4bbb49b4efb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007375533s
Jun  1 02:42:14.924: INFO: Pod "downwardapi-volume-f603025e-e96f-423d-9a5d-4bbb49b4efb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01071769s
STEP: Saw pod success
Jun  1 02:42:14.924: INFO: Pod "downwardapi-volume-f603025e-e96f-423d-9a5d-4bbb49b4efb3" satisfied condition "Succeeded or Failed"
Jun  1 02:42:14.927: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-f603025e-e96f-423d-9a5d-4bbb49b4efb3 container client-container: <nil>
STEP: delete the pod
Jun  1 02:42:14.944: INFO: Waiting for pod downwardapi-volume-f603025e-e96f-423d-9a5d-4bbb49b4efb3 to disappear
Jun  1 02:42:14.946: INFO: Pod downwardapi-volume-f603025e-e96f-423d-9a5d-4bbb49b4efb3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 02:42:14.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9668" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":188,"skipped":2958,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 12 lines ...
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 02:42:14.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7557" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":292,"completed":189,"skipped":2959,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Updating configmap configmap-test-upd-3b89cb08-f0b2-428d-87cb-3027f4b56869
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 02:43:33.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5316" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":190,"skipped":2961,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 02:43:33.373: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d202a2b2-e4bc-4413-9a42-ff693ce537dd" in namespace "downward-api-287" to be "Succeeded or Failed"
Jun  1 02:43:33.375: INFO: Pod "downwardapi-volume-d202a2b2-e4bc-4413-9a42-ff693ce537dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.54617ms
Jun  1 02:43:35.379: INFO: Pod "downwardapi-volume-d202a2b2-e4bc-4413-9a42-ff693ce537dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006162377s
STEP: Saw pod success
Jun  1 02:43:35.379: INFO: Pod "downwardapi-volume-d202a2b2-e4bc-4413-9a42-ff693ce537dd" satisfied condition "Succeeded or Failed"
Jun  1 02:43:35.382: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-d202a2b2-e4bc-4413-9a42-ff693ce537dd container client-container: <nil>
STEP: delete the pod
Jun  1 02:43:35.396: INFO: Waiting for pod downwardapi-volume-d202a2b2-e4bc-4413-9a42-ff693ce537dd to disappear
Jun  1 02:43:35.398: INFO: Pod downwardapi-volume-d202a2b2-e4bc-4413-9a42-ff693ce537dd no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 02:43:35.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-287" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":292,"completed":191,"skipped":2981,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 02:43:35.405: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun  1 02:43:35.434: INFO: Waiting up to 5m0s for pod "pod-986920f6-2ece-4784-b83c-18d387f7b598" in namespace "emptydir-1372" to be "Succeeded or Failed"
Jun  1 02:43:35.436: INFO: Pod "pod-986920f6-2ece-4784-b83c-18d387f7b598": Phase="Pending", Reason="", readiness=false. Elapsed: 2.585159ms
Jun  1 02:43:37.442: INFO: Pod "pod-986920f6-2ece-4784-b83c-18d387f7b598": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007958782s
STEP: Saw pod success
Jun  1 02:43:37.442: INFO: Pod "pod-986920f6-2ece-4784-b83c-18d387f7b598" satisfied condition "Succeeded or Failed"
Jun  1 02:43:37.445: INFO: Trying to get logs from node kind-worker pod pod-986920f6-2ece-4784-b83c-18d387f7b598 container test-container: <nil>
STEP: delete the pod
Jun  1 02:43:37.457: INFO: Waiting for pod pod-986920f6-2ece-4784-b83c-18d387f7b598 to disappear
Jun  1 02:43:37.460: INFO: Pod pod-986920f6-2ece-4784-b83c-18d387f7b598 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 02:43:37.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1372" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":192,"skipped":2994,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSS
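The emptyDir matrix tests create a pod that writes a file into the volume and checks its mode and ownership. A sketch of the (root,0777,default) case, assuming k8s.io/api (the image and command only approximate what the real mounttest utility does):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// emptyDir on the node's default storage medium; the test container
	// creates a file as root with mode 0777 and verifies the result.
	podSpec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:         "test-container",
			Image:        "docker.io/library/busybox:1.29", // illustrative
			Command:      []string{"sh", "-c", "umask 0000 && touch /test-volume/file && ls -l /test-volume/file"},
			VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
		}},
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
			},
		}},
	}
	fmt.Println(podSpec.Volumes[0].EmptyDir.Medium == corev1.StorageMediumDefault)
}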
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Jun  1 02:43:37.493: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 02:43:38.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4557" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":292,"completed":193,"skipped":3002,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should print the output to logs [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Jun  1 02:43:40.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9394" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":292,"completed":194,"skipped":3011,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 02:43:40.764: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Jun  1 02:43:48.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9577" for this suite.
•{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":292,"completed":195,"skipped":3025,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSS
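In the Job test above, "locally restarted" means the kubelet restarts the failing container in place (restartPolicy: OnFailure) instead of the Job controller replacing whole pods. A hedged sketch of such a Job, with an emptyDir used as the restart marker because it survives container restarts (name, counts, and command are illustrative):

package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "fail-once-local"}, // illustrative
		Spec: batchv1.JobSpec{
			Parallelism: int32Ptr(2),
			Completions: int32Ptr(4),
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// OnFailure keeps the pod and lets the kubelet restart
					// the failed container in place.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "c",
						Image: "docker.io/library/busybox:1.29",
						// Fail the first attempt; the marker file lives on an
						// emptyDir, which persists across container restarts.
						Command: []string{"sh", "-c",
							"test -f /data/ran || { touch /data/ran; exit 1; }; exit 0"},
						VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
					}},
					Volumes: []corev1.Volume{{
						Name:         "data",
						VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
					}},
				},
			},
		},
	}
	fmt.Println(job.Name, *job.Spec.Completions)
}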
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Jun  1 02:43:50.850: INFO: Initial restart count of pod test-webserver-e05e2060-b43f-4504-abf6-7bfb7d953c48 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 02:47:51.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-609" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":292,"completed":196,"skipped":3031,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSS
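The probe test above asserts that a pod probed over HTTP at a healthy path keeps a restart count of 0 for the whole observation window. Roughly, the container under test carries a liveness probe like this (image, path, and thresholds are illustrative; note the field-naming caveat in the comment):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	container := corev1.Container{
		Name:  "test-webserver",
		Image: "docker.io/library/busybox:1.29", // illustrative; the suite uses a test webserver image
		LivenessProbe: &corev1.Probe{
			// Handler is the embedded field name in the v1 API of this era;
			// newer client-go releases rename it to ProbeHandler.
			Handler: corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
			},
			InitialDelaySeconds: 15,
			TimeoutSeconds:      1,
			FailureThreshold:    3,
		},
	}
	fmt.Println(container.LivenessProbe.Handler.HTTPGet.Path)
}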
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 02:47:51.357: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod
Jun  1 02:47:51.386: INFO: PodSpec: initContainers in spec.initContainers
Jun  1 02:48:43.025: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-f9773cbd-7598-4262-9d89-a2327f8d0acd", GenerateName:"", Namespace:"init-container-3948", SelfLink:"/api/v1/namespaces/init-container-3948/pods/pod-init-f9773cbd-7598-4262-9d89-a2327f8d0acd", UID:"c3f8cbb7-d9a4-4d72-8a2c-a2fdc02b6776", ResourceVersion:"26560", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726576471, loc:(*time.Location)(0x8006d20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"386271949"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002344040), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002344060)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002344080), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0023440c0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-fqnn6", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000b1e940), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fqnn6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fqnn6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fqnn6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001b461a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000ab6000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b46260)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b46500)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001b46508), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001b4650c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726576471, loc:(*time.Location)(0x8006d20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726576471, loc:(*time.Location)(0x8006d20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726576471, loc:(*time.Location)(0x8006d20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726576471, loc:(*time.Location)(0x8006d20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.4", PodIP:"10.244.2.126", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.126"}}, StartTime:(*v1.Time)(0xc0023440e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000ab61c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000ab6230)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://8a8089600126d79c991b49e58dbb7f016033984379721d10e9fa5a3741283135", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002344120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002344100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc001b466cf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Jun  1 02:48:43.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3948" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":292,"completed":197,"skipped":3038,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
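The dump above reduces to a small spec: init1 always fails, so init2 and run1 never start, and restartPolicy Always makes the kubelet retry init1 with backoff (RestartCount:3 at the time of the dump). Reconstructed in Go, keeping the values shown in the log (the run used a generated pod name):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "pod-init-example", // illustrative; the suite generates the name
			Labels: map[string]string{"name": "foo"},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// Init containers run sequentially; init1 failing blocks init2.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			// The app container must never be created while init fails.
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.2"},
			},
		},
	}
	fmt.Println(len(pod.Spec.InitContainers))
}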
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-2372f703-0d97-4152-8652-95e70421f418
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 02:50:17.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-531" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":198,"skipped":3061,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}

------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath 
  runs ReplicaSets to verify preemption running path [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 32 lines ...
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/framework.go:175
Jun  1 02:51:52.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-7615" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:75
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":292,"completed":199,"skipped":3061,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-secret-b4v8
STEP: Creating a pod to test atomic-volume-subpath
Jun  1 02:51:52.788: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-b4v8" in namespace "subpath-7080" to be "Succeeded or Failed"
Jun  1 02:51:52.792: INFO: Pod "pod-subpath-test-secret-b4v8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.897418ms
Jun  1 02:51:54.796: INFO: Pod "pod-subpath-test-secret-b4v8": Phase="Running", Reason="", readiness=true. Elapsed: 2.008150659s
Jun  1 02:51:56.800: INFO: Pod "pod-subpath-test-secret-b4v8": Phase="Running", Reason="", readiness=true. Elapsed: 4.011758268s
Jun  1 02:51:58.803: INFO: Pod "pod-subpath-test-secret-b4v8": Phase="Running", Reason="", readiness=true. Elapsed: 6.014640561s
Jun  1 02:52:00.807: INFO: Pod "pod-subpath-test-secret-b4v8": Phase="Running", Reason="", readiness=true. Elapsed: 8.018913195s
Jun  1 02:52:02.812: INFO: Pod "pod-subpath-test-secret-b4v8": Phase="Running", Reason="", readiness=true. Elapsed: 10.02397757s
Jun  1 02:52:04.816: INFO: Pod "pod-subpath-test-secret-b4v8": Phase="Running", Reason="", readiness=true. Elapsed: 12.027974217s
Jun  1 02:52:06.819: INFO: Pod "pod-subpath-test-secret-b4v8": Phase="Running", Reason="", readiness=true. Elapsed: 14.031186815s
Jun  1 02:52:08.823: INFO: Pod "pod-subpath-test-secret-b4v8": Phase="Running", Reason="", readiness=true. Elapsed: 16.034725977s
Jun  1 02:52:10.827: INFO: Pod "pod-subpath-test-secret-b4v8": Phase="Running", Reason="", readiness=true. Elapsed: 18.039223056s
Jun  1 02:52:12.832: INFO: Pod "pod-subpath-test-secret-b4v8": Phase="Running", Reason="", readiness=true. Elapsed: 20.043334383s
Jun  1 02:52:14.835: INFO: Pod "pod-subpath-test-secret-b4v8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.046951782s
STEP: Saw pod success
Jun  1 02:52:14.835: INFO: Pod "pod-subpath-test-secret-b4v8" satisfied condition "Succeeded or Failed"
Jun  1 02:52:14.838: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-secret-b4v8 container test-container-subpath-secret-b4v8: <nil>
STEP: delete the pod
Jun  1 02:52:14.861: INFO: Waiting for pod pod-subpath-test-secret-b4v8 to disappear
Jun  1 02:52:14.863: INFO: Pod pod-subpath-test-secret-b4v8 no longer exists
STEP: Deleting pod pod-subpath-test-secret-b4v8
Jun  1 02:52:14.863: INFO: Deleting pod "pod-subpath-test-secret-b4v8" in namespace "subpath-7080"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Jun  1 02:52:14.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7080" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":292,"completed":200,"skipped":3075,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSS
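Secret, configMap, downwardAPI, and projected volumes are written atomically via symlink swaps; the subpath test mounts a single projected file with subPath and verifies it stays readable while the pod runs. A sketch, assuming k8s.io/api (secret and file names are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	podSpec := corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:    "test-container-subpath",
			Image:   "docker.io/library/busybox:1.29", // illustrative
			Command: []string{"sh", "-c", "cat /test-volume/secret-key"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "test-volume",
				MountPath: "/test-volume/secret-key",
				SubPath:   "secret-key", // pin a single projected file
			}},
		}},
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"}, // illustrative
			},
		}},
	}
	fmt.Println(podSpec.Containers[0].VolumeMounts[0].SubPath)
}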
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 25 lines ...
Jun  1 02:52:30.938: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Jun  1 02:52:30.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8915" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":292,"completed":201,"skipped":3078,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
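The preStop test registers an HTTP hook that the kubelet fires before terminating the container; the suite then checks that its target server saw the request. A sketch of such a container (host, port, and path are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	container := corev1.Container{
		Name:  "pod-with-prestop-http-hook",
		Image: "k8s.gcr.io/pause:3.2", // illustrative
		Lifecycle: &corev1.Lifecycle{
			// Handler in the v1 API of this era; LifecycleHandler in newer releases.
			PreStop: &corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/echo?msg=prestop",
					Host: "10.244.2.1", // illustrative target pod IP
					Port: intstr.FromInt(8080),
				},
			},
		},
	}
	fmt.Println(container.Lifecycle.PreStop.HTTPGet.Path)
}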
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 17 lines ...
Jun  1 02:52:30.996: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-6625 /api/v1/namespaces/watch-6625/configmaps/e2e-watch-test-watch-closed 842d8593-72a6-4486-937a-670951ac48cd 27527 0 2020-06-01 02:52:30 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-06-01 02:52:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jun  1 02:52:30.997: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-6625 /api/v1/namespaces/watch-6625/configmaps/e2e-watch-test-watch-closed 842d8593-72a6-4486-937a-670951ac48cd 27528 0 2020-06-01 02:52:30 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-06-01 02:52:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Jun  1 02:52:30.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6625" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":292,"completed":202,"skipped":3104,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
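The watch test above restarts a watch from the last resourceVersion it observed so that no intervening events are lost. With client-go of roughly this vintage (watch calls take a context from v0.18 on), that looks like the following sketch; the kubeconfig path, namespace, label selector, and resourceVersion mirror the log lines above but are illustrative here:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/kind-test-config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Resume from a previously observed resourceVersion ("27527" above).
	w, err := client.CoreV1().ConfigMaps("watch-6625").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
		ResourceVersion: "27527",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
}

If the requested resourceVersion has already been compacted away, the API server ends the watch with a 410 Gone error and the client must relist before watching again.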
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
... skipping 12 lines ...
Jun  1 02:52:36.047: INFO: Trying to dial the pod
Jun  1 02:52:41.060: INFO: Controller my-hostname-basic-385db291-fb5f-4d08-b859-dcbd46e04649: Got expected result from replica 1 [my-hostname-basic-385db291-fb5f-4d08-b859-dcbd46e04649-bpznh]: "my-hostname-basic-385db291-fb5f-4d08-b859-dcbd46e04649-bpznh", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:175
Jun  1 02:52:41.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3530" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":292,"completed":203,"skipped":3131,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 5 lines ...
[BeforeEach] [sig-network] Services
  test/e2e/network/service.go:809
[It] should serve a basic endpoint from pods  [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating service endpoint-test2 in namespace services-2671
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2671 to expose endpoints map[]
Jun  1 02:52:41.108: INFO: Get endpoints failed (3.183919ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jun  1 02:52:42.112: INFO: successfully validated that service endpoint-test2 in namespace services-2671 exposes endpoints map[] (1.007047289s elapsed)
STEP: Creating pod pod1 in namespace services-2671
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2671 to expose endpoints map[pod1:[80]]
Jun  1 02:52:45.148: INFO: successfully validated that service endpoint-test2 in namespace services-2671 exposes endpoints map[pod1:[80]] (3.028695497s elapsed)
STEP: Creating pod pod2 in namespace services-2671
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2671 to expose endpoints map[pod1:[80] pod2:[80]]
... skipping 7 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 02:52:49.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2671" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":292,"completed":204,"skipped":3140,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
Jun  1 02:52:54.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6937" for this suite.
STEP: Destroying namespace "webhook-6937-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":292,"completed":205,"skipped":3171,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 51 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 02:53:19.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9312" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":292,"completed":206,"skipped":3196,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSS
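Session affinity is a field on the Service spec: with sessionAffinity: ClientIP, kube-proxy pins each client IP to one backend, and the test verifies that repeated requests through the node port land on a single pod. An illustrative sketch:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport"}, // illustrative
		Spec: corev1.ServiceSpec{
			Type:            corev1.ServiceTypeNodePort,
			SessionAffinity: corev1.ServiceAffinityClientIP,
			Selector:        map[string]string{"app": "affinity-nodeport"},
			Ports: []corev1.ServicePort{{
				Protocol:   corev1.ProtocolTCP,
				Port:       80,
				TargetPort: intstr.FromInt(9376),
			}},
		},
	}
	fmt.Println(svc.Spec.SessionAffinity)
}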
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 02:53:19.383: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb84405f-3842-4006-ac45-4d2fde52d75f" in namespace "downward-api-9493" to be "Succeeded or Failed"
Jun  1 02:53:19.390: INFO: Pod "downwardapi-volume-bb84405f-3842-4006-ac45-4d2fde52d75f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206802ms
Jun  1 02:53:21.395: INFO: Pod "downwardapi-volume-bb84405f-3842-4006-ac45-4d2fde52d75f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01131452s
Jun  1 02:53:23.398: INFO: Pod "downwardapi-volume-bb84405f-3842-4006-ac45-4d2fde52d75f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014369396s
STEP: Saw pod success
Jun  1 02:53:23.398: INFO: Pod "downwardapi-volume-bb84405f-3842-4006-ac45-4d2fde52d75f" satisfied condition "Succeeded or Failed"
Jun  1 02:53:23.401: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-bb84405f-3842-4006-ac45-4d2fde52d75f container client-container: <nil>
STEP: delete the pod
Jun  1 02:53:23.413: INFO: Waiting for pod downwardapi-volume-bb84405f-3842-4006-ac45-4d2fde52d75f to disappear
Jun  1 02:53:23.415: INFO: Pod downwardapi-volume-bb84405f-3842-4006-ac45-4d2fde52d75f no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 02:53:23.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9493" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":207,"skipped":3208,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Lease
... skipping 5 lines ...
[It] lease API should be available [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Lease
  test/e2e/framework/framework.go:175
Jun  1 02:53:23.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-1143" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":292,"completed":208,"skipped":3261,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSS
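The Lease test drives plain CRUD against the coordination.k8s.io/v1 API that also backs node heartbeats and leader election. A minimal Lease object in Go (the namespace is taken from the log; the other values are illustrative):

package main

import (
	"fmt"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	holder := "holder-identity" // illustrative
	duration := int32(30)
	now := metav1.NowMicro()
	lease := &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: "lease-example", Namespace: "lease-test-1143"},
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       &holder,
			LeaseDurationSeconds: &duration,
			AcquireTime:          &now,
			RenewTime:            &now,
		},
	}
	fmt.Println(*lease.Spec.HolderIdentity, *lease.Spec.LeaseDurationSeconds)
}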
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 50 lines ...
Jun  1 02:55:25.069: INFO: Waiting for statefulset status.replicas updated to 0
Jun  1 02:55:25.072: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Jun  1 02:55:25.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3107" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":292,"completed":209,"skipped":3264,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
Jun  1 02:55:27.663: INFO: Successfully updated pod "annotationupdate26dcd854-4626-419d-b546-b4383677f567"
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 02:55:29.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7101" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":292,"completed":210,"skipped":3282,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Jun  1 02:55:32.733: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Jun  1 02:55:32.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7570" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":292,"completed":211,"skipped":3286,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSS
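With TerminationMessagePolicy FallbackToLogsOnError, container logs are used only when the container fails without writing a termination message; here the container succeeds and writes to the file, so the file content is reported, matching the Expected: &{OK} line above. A sketch (image and command are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	container := corev1.Container{
		Name:  "termination-message-container",
		Image: "docker.io/library/busybox:1.29", // illustrative
		// Succeed and leave a termination message in the file.
		Command: []string{"/bin/sh", "-c", "echo -n OK > /dev/termination-log; exit 0"},
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	fmt.Println(container.TerminationMessagePolicy)
}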
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 26 lines ...
Jun  1 02:55:36.877: INFO: Pod "test-recreate-deployment-d5667d9c7-tft76" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-tft76 test-recreate-deployment-d5667d9c7- deployment-5517 /api/v1/namespaces/deployment-5517/pods/test-recreate-deployment-d5667d9c7-tft76 b8faf5b2-0524-4fd7-801b-fde56d7fcce8 28916 0 2020-06-01 02:55:36 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 d34281ec-615c-4ad4-85d9-1b20124d0f8d 0xc002c736a0 0xc002c736a1}] []  [{kube-controller-manager Update v1 2020-06-01 02:55:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d34281ec-615c-4ad4-85d9-1b20124d0f8d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-01 02:55:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8h4p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8h4p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8h4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 02:55:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 02:55:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 02:55:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 02:55:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-06-01 02:55:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Jun  1 02:55:36.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5517" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":292,"completed":212,"skipped":3297,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
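A Recreate deployment scales the old ReplicaSet to zero before creating the new one, which is why the dump above shows only the replacement pod, still Pending. The spec under test reduces to roughly this (labels and image taken from the dump; the replica count is illustrative):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod-3"}},
			// Recreate: kill all old pods before bringing up new ones.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod-3"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "httpd",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	fmt.Println(dep.Spec.Strategy.Type)
}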
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 48 lines ...
• [SLOW TEST:308.125 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":292,"completed":213,"skipped":3317,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 7 lines ...
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Jun  1 03:00:50.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9025" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":292,"completed":214,"skipped":3321,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Jun  1 03:00:54.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7308" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":292,"completed":215,"skipped":3329,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 9 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 03:00:54.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3520" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":292,"completed":216,"skipped":3417,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}

------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 47 lines ...
Jun  1 03:00:58.477: INFO: stderr: ""
Jun  1 03:00:58.477: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 03:00:58.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5200" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":292,"completed":217,"skipped":3417,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-d45cf6ce-a9f0-48eb-8782-b9782d1953e1
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 03:01:06.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7046" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":218,"skipped":3482,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Jun  1 03:01:06.611: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jun  1 03:01:06.641: INFO: Waiting up to 5m0s for pod "downward-api-fc636139-29e5-454b-992c-b61e63b92a93" in namespace "downward-api-4094" to be "Succeeded or Failed"
Jun  1 03:01:06.643: INFO: Pod "downward-api-fc636139-29e5-454b-992c-b61e63b92a93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.53978ms
Jun  1 03:01:08.652: INFO: Pod "downward-api-fc636139-29e5-454b-992c-b61e63b92a93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011070064s
STEP: Saw pod success
Jun  1 03:01:08.652: INFO: Pod "downward-api-fc636139-29e5-454b-992c-b61e63b92a93" satisfied condition "Succeeded or Failed"
Jun  1 03:01:08.657: INFO: Trying to get logs from node kind-worker2 pod downward-api-fc636139-29e5-454b-992c-b61e63b92a93 container dapi-container: <nil>
STEP: delete the pod
Jun  1 03:01:08.692: INFO: Waiting for pod downward-api-fc636139-29e5-454b-992c-b61e63b92a93 to disappear
Jun  1 03:01:08.695: INFO: Pod downward-api-fc636139-29e5-454b-992c-b61e63b92a93 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Jun  1 03:01:08.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4094" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":292,"completed":219,"skipped":3521,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
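The downward API can also surface limits and requests as environment variables via resourceFieldRef, with Divisor controlling the units the value is reported in; that is the mechanism this test exercises. An illustrative sketch (container name matches the log; values are made up):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	container := corev1.Container{
		Name:    "dapi-container",
		Image:   "docker.io/library/busybox:1.29", // illustrative
		Command: []string{"sh", "-c", "env"},
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("250m"),
				corev1.ResourceMemory: resource.MustParse("32Mi"),
			},
			Limits: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("1250m"),
				corev1.ResourceMemory: resource.MustParse("64Mi"),
			},
		},
		Env: []corev1.EnvVar{{
			Name: "CPU_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{
					Resource: "limits.cpu",
					Divisor:  resource.MustParse("1"), // report in whole cores
				},
			},
		}},
	}
	fmt.Println(container.Env[0].Name)
}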
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-8093/configmap-test-9b744ca9-7d53-4dc7-a410-12a7cea1aa1f
STEP: Creating a pod to test consume configMaps
Jun  1 03:01:08.740: INFO: Waiting up to 5m0s for pod "pod-configmaps-b0cb7e7a-201e-4ccf-8460-181039980d18" in namespace "configmap-8093" to be "Succeeded or Failed"
Jun  1 03:01:08.743: INFO: Pod "pod-configmaps-b0cb7e7a-201e-4ccf-8460-181039980d18": Phase="Pending", Reason="", readiness=false. Elapsed: 3.35785ms
Jun  1 03:01:10.747: INFO: Pod "pod-configmaps-b0cb7e7a-201e-4ccf-8460-181039980d18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006902984s
STEP: Saw pod success
Jun  1 03:01:10.747: INFO: Pod "pod-configmaps-b0cb7e7a-201e-4ccf-8460-181039980d18" satisfied condition "Succeeded or Failed"
Jun  1 03:01:10.749: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-b0cb7e7a-201e-4ccf-8460-181039980d18 container env-test: <nil>
STEP: delete the pod
Jun  1 03:01:10.762: INFO: Waiting for pod pod-configmaps-b0cb7e7a-201e-4ccf-8460-181039980d18 to disappear
Jun  1 03:01:10.765: INFO: Pod pod-configmaps-b0cb7e7a-201e-4ccf-8460-181039980d18 no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 03:01:10.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8093" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":292,"completed":220,"skipped":3578,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 38 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Jun  1 03:01:11.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5141" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":292,"completed":221,"skipped":3604,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Jun  1 03:01:11.866: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 03:01:13.847: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 03:01:26.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5779" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":292,"completed":222,"skipped":3685,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 33 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Jun  1 03:01:32.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1102" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":292,"completed":223,"skipped":3687,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 03:01:32.255: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6a60eabd-7d97-4bd5-9c47-3434ca332fac" in namespace "downward-api-2349" to be "Succeeded or Failed"
Jun  1 03:01:32.259: INFO: Pod "downwardapi-volume-6a60eabd-7d97-4bd5-9c47-3434ca332fac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.44771ms
Jun  1 03:01:34.263: INFO: Pod "downwardapi-volume-6a60eabd-7d97-4bd5-9c47-3434ca332fac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007467261s
STEP: Saw pod success
Jun  1 03:01:34.263: INFO: Pod "downwardapi-volume-6a60eabd-7d97-4bd5-9c47-3434ca332fac" satisfied condition "Succeeded or Failed"
Jun  1 03:01:34.266: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-6a60eabd-7d97-4bd5-9c47-3434ca332fac container client-container: <nil>
STEP: delete the pod
Jun  1 03:01:34.280: INFO: Waiting for pod downwardapi-volume-6a60eabd-7d97-4bd5-9c47-3434ca332fac to disappear
Jun  1 03:01:34.283: INFO: Pod downwardapi-volume-6a60eabd-7d97-4bd5-9c47-3434ca332fac no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 03:01:34.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2349" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":224,"skipped":3701,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
Jun  1 03:01:41.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8987" for this suite.
STEP: Destroying namespace "webhook-8987-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":292,"completed":225,"skipped":3706,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Jun  1 03:01:41.976: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test env composition
Jun  1 03:01:42.019: INFO: Waiting up to 5m0s for pod "var-expansion-6d74ebf0-44e4-4d34-bea7-487d68d8ed79" in namespace "var-expansion-1324" to be "Succeeded or Failed"
Jun  1 03:01:42.027: INFO: Pod "var-expansion-6d74ebf0-44e4-4d34-bea7-487d68d8ed79": Phase="Pending", Reason="", readiness=false. Elapsed: 7.273316ms
Jun  1 03:01:44.029: INFO: Pod "var-expansion-6d74ebf0-44e4-4d34-bea7-487d68d8ed79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00959757s
STEP: Saw pod success
Jun  1 03:01:44.029: INFO: Pod "var-expansion-6d74ebf0-44e4-4d34-bea7-487d68d8ed79" satisfied condition "Succeeded or Failed"
Jun  1 03:01:44.031: INFO: Trying to get logs from node kind-worker pod var-expansion-6d74ebf0-44e4-4d34-bea7-487d68d8ed79 container dapi-container: <nil>
STEP: delete the pod
Jun  1 03:01:44.047: INFO: Waiting for pod var-expansion-6d74ebf0-44e4-4d34-bea7-487d68d8ed79 to disappear
Jun  1 03:01:44.049: INFO: Pod var-expansion-6d74ebf0-44e4-4d34-bea7-487d68d8ed79 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 03:01:44.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1324" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":292,"completed":226,"skipped":3750,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 8 lines ...
Jun  1 03:01:44.124: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"4a7717c0-14f0-49f0-a630-3dffc81d5975", Controller:(*bool)(0xc004464626), BlockOwnerDeletion:(*bool)(0xc004464627)}}
Jun  1 03:01:44.133: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"78ebe678-012e-46c1-a487-3d0596b466a7", Controller:(*bool)(0xc0043d011e), BlockOwnerDeletion:(*bool)(0xc0043d011f)}}
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Jun  1 03:01:49.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5892" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":292,"completed":227,"skipped":3770,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Jun  1 03:01:49.147: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's args
Jun  1 03:01:49.180: INFO: Waiting up to 5m0s for pod "var-expansion-d6628c85-ec93-4532-8ecb-6571bc9c5720" in namespace "var-expansion-2131" to be "Succeeded or Failed"
Jun  1 03:01:49.183: INFO: Pod "var-expansion-d6628c85-ec93-4532-8ecb-6571bc9c5720": Phase="Pending", Reason="", readiness=false. Elapsed: 2.44904ms
Jun  1 03:01:51.187: INFO: Pod "var-expansion-d6628c85-ec93-4532-8ecb-6571bc9c5720": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006367679s
STEP: Saw pod success
Jun  1 03:01:51.187: INFO: Pod "var-expansion-d6628c85-ec93-4532-8ecb-6571bc9c5720" satisfied condition "Succeeded or Failed"
Jun  1 03:01:51.190: INFO: Trying to get logs from node kind-worker pod var-expansion-d6628c85-ec93-4532-8ecb-6571bc9c5720 container dapi-container: <nil>
STEP: delete the pod
Jun  1 03:01:51.206: INFO: Waiting for pod var-expansion-d6628c85-ec93-4532-8ecb-6571bc9c5720 to disappear
Jun  1 03:01:51.209: INFO: Pod var-expansion-d6628c85-ec93-4532-8ecb-6571bc9c5720 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 03:01:51.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2131" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":292,"completed":228,"skipped":3800,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-6a3d21d9-0070-4cca-bf07-6de3ea94b902
STEP: Creating a pod to test consume configMaps
Jun  1 03:01:51.254: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5795e935-7c92-49d0-9ca1-bf5230d32459" in namespace "projected-912" to be "Succeeded or Failed"
Jun  1 03:01:51.256: INFO: Pod "pod-projected-configmaps-5795e935-7c92-49d0-9ca1-bf5230d32459": Phase="Pending", Reason="", readiness=false. Elapsed: 2.474986ms
Jun  1 03:01:53.261: INFO: Pod "pod-projected-configmaps-5795e935-7c92-49d0-9ca1-bf5230d32459": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006686143s
STEP: Saw pod success
Jun  1 03:01:53.261: INFO: Pod "pod-projected-configmaps-5795e935-7c92-49d0-9ca1-bf5230d32459" satisfied condition "Succeeded or Failed"
Jun  1 03:01:53.264: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-5795e935-7c92-49d0-9ca1-bf5230d32459 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 03:01:53.280: INFO: Waiting for pod pod-projected-configmaps-5795e935-7c92-49d0-9ca1-bf5230d32459 to disappear
Jun  1 03:01:53.283: INFO: Pod pod-projected-configmaps-5795e935-7c92-49d0-9ca1-bf5230d32459 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 03:01:53.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-912" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":292,"completed":229,"skipped":3846,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 44 lines ...
Jun  1 03:02:14.412: INFO: Pod "test-rollover-deployment-7c4fd9c879-dtsvf" is available:
&Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-dtsvf test-rollover-deployment-7c4fd9c879- deployment-9644 /api/v1/namespaces/deployment-9644/pods/test-rollover-deployment-7c4fd9c879-dtsvf ea1b30dc-7bd6-4f43-9258-9a563e648dd1 30935 0 2020-06-01 03:02:02 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 19f18765-e51c-4ce8-9136-28d53398ed17 0xc004285ee7 0xc004285ee8}] []  [{kube-controller-manager Update v1 2020-06-01 03:02:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19f18765-e51c-4ce8-9136-28d53398ed17\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-01 03:02:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.173\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2sxj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2sxj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2sxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 03:02:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 03:02:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 03:02:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 03:02:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.173,StartTime:2020-06-01 03:02:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-01 03:02:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://70033bf3653c3cc2855647e48ead2ffca74b279c3c58756118259a5dae605b58,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.173,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Jun  1 03:02:14.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9644" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":292,"completed":230,"skipped":3873,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
Jun  1 03:02:24.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6944" for this suite.
STEP: Destroying namespace "webhook-6944-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":292,"completed":231,"skipped":3898,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 03:02:40.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5215" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":292,"completed":232,"skipped":3910,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Pods Extended
... skipping 10 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  test/e2e/framework/framework.go:175
Jun  1 03:02:40.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7796" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":292,"completed":233,"skipped":3914,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Jun  1 03:02:42.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3119" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":234,"skipped":3965,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 22 lines ...
Jun  1 03:03:22.612: INFO: Waiting for statefulset status.replicas updated to 0
Jun  1 03:03:22.619: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Jun  1 03:03:22.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-217" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":292,"completed":235,"skipped":3974,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Jun  1 03:03:28.283: INFO: stderr: ""
Jun  1 03:03:28.283: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5867-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 03:03:31.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9738" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":292,"completed":236,"skipped":3988,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 44 lines ...
Jun  1 03:04:11.481: INFO: Deleting pod "simpletest.rc-tjl85" in namespace "gc-5775"
Jun  1 03:04:11.503: INFO: Deleting pod "simpletest.rc-zkgs6" in namespace "gc-5775"
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Jun  1 03:04:11.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5775" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":292,"completed":237,"skipped":3991,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 13 lines ...
Jun  1 03:04:11.674: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-6309 /api/v1/namespaces/watch-6309/configmaps/e2e-watch-test-resource-version 95873e7b-234c-41b0-8f4a-c1888643c596 31784 0 2020-06-01 03:04:11 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-06-01 03:04:11 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jun  1 03:04:11.675: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-6309 /api/v1/namespaces/watch-6309/configmaps/e2e-watch-test-resource-version 95873e7b-234c-41b0-8f4a-c1888643c596 31785 0 2020-06-01 03:04:11 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-06-01 03:04:11 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Jun  1 03:04:11.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6309" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":292,"completed":238,"skipped":4013,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-f281bdf5-8e3f-4e1f-ae26-984be1fb7e77
STEP: Creating a pod to test consume secrets
Jun  1 03:04:11.739: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7bd47bb3-d83c-4cfe-bf63-d4d75a7cb568" in namespace "projected-1615" to be "Succeeded or Failed"
Jun  1 03:04:11.742: INFO: Pod "pod-projected-secrets-7bd47bb3-d83c-4cfe-bf63-d4d75a7cb568": Phase="Pending", Reason="", readiness=false. Elapsed: 2.638731ms
Jun  1 03:04:13.753: INFO: Pod "pod-projected-secrets-7bd47bb3-d83c-4cfe-bf63-d4d75a7cb568": Phase="Running", Reason="", readiness=true. Elapsed: 2.013386128s
Jun  1 03:04:15.756: INFO: Pod "pod-projected-secrets-7bd47bb3-d83c-4cfe-bf63-d4d75a7cb568": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016763618s
STEP: Saw pod success
Jun  1 03:04:15.756: INFO: Pod "pod-projected-secrets-7bd47bb3-d83c-4cfe-bf63-d4d75a7cb568" satisfied condition "Succeeded or Failed"
Jun  1 03:04:15.759: INFO: Trying to get logs from node kind-worker pod pod-projected-secrets-7bd47bb3-d83c-4cfe-bf63-d4d75a7cb568 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun  1 03:04:15.784: INFO: Waiting for pod pod-projected-secrets-7bd47bb3-d83c-4cfe-bf63-d4d75a7cb568 to disappear
Jun  1 03:04:15.786: INFO: Pod pod-projected-secrets-7bd47bb3-d83c-4cfe-bf63-d4d75a7cb568 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Jun  1 03:04:15.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1615" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":239,"skipped":4023,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}

------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Jun  1 03:04:15.797: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's command
Jun  1 03:04:15.841: INFO: Waiting up to 5m0s for pod "var-expansion-8199bac9-d64d-45bb-9577-d93c68a41fe2" in namespace "var-expansion-2069" to be "Succeeded or Failed"
Jun  1 03:04:15.844: INFO: Pod "var-expansion-8199bac9-d64d-45bb-9577-d93c68a41fe2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.108578ms
Jun  1 03:04:17.846: INFO: Pod "var-expansion-8199bac9-d64d-45bb-9577-d93c68a41fe2": Phase="Running", Reason="", readiness=true. Elapsed: 2.005447546s
Jun  1 03:04:19.851: INFO: Pod "var-expansion-8199bac9-d64d-45bb-9577-d93c68a41fe2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010138821s
STEP: Saw pod success
Jun  1 03:04:19.851: INFO: Pod "var-expansion-8199bac9-d64d-45bb-9577-d93c68a41fe2" satisfied condition "Succeeded or Failed"
Jun  1 03:04:19.853: INFO: Trying to get logs from node kind-worker pod var-expansion-8199bac9-d64d-45bb-9577-d93c68a41fe2 container dapi-container: <nil>
STEP: delete the pod
Jun  1 03:04:19.866: INFO: Waiting for pod var-expansion-8199bac9-d64d-45bb-9577-d93c68a41fe2 to disappear
Jun  1 03:04:19.868: INFO: Pod var-expansion-8199bac9-d64d-45bb-9577-d93c68a41fe2 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 03:04:19.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2069" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":292,"completed":240,"skipped":4023,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 03:04:30.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6553" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":292,"completed":241,"skipped":4033,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}

------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Jun  1 03:04:34.998: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:35.001: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:35.010: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:35.013: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:35.016: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:35.018: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:35.023: INFO: Lookups using dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6564.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6564.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local jessie_udp@dns-test-service-2.dns-6564.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6564.svc.cluster.local]

Jun  1 03:04:40.027: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:40.030: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:40.033: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:40.036: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:40.045: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:40.048: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:40.050: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:40.053: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:40.059: INFO: Lookups using dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6564.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6564.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local jessie_udp@dns-test-service-2.dns-6564.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6564.svc.cluster.local]

Jun  1 03:04:45.028: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:45.031: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:45.035: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:45.037: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:45.047: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:45.049: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:45.052: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:45.055: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:45.061: INFO: Lookups using dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6564.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6564.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local jessie_udp@dns-test-service-2.dns-6564.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6564.svc.cluster.local]

Jun  1 03:04:50.029: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:50.032: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:50.035: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:50.038: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:50.046: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:50.049: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:50.052: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:50.054: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:50.060: INFO: Lookups using dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6564.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6564.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local jessie_udp@dns-test-service-2.dns-6564.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6564.svc.cluster.local]

Jun  1 03:04:55.027: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:55.031: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:55.034: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:55.037: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:55.045: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:55.048: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:55.051: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:55.056: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:04:55.063: INFO: Lookups using dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6564.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6564.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local jessie_udp@dns-test-service-2.dns-6564.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6564.svc.cluster.local]

Jun  1 03:05:00.028: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:05:00.031: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:05:00.034: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:05:00.037: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:05:00.046: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:05:00.049: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:05:00.051: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:05:00.054: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6564.svc.cluster.local from pod dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde: the server could not find the requested resource (get pods dns-test-288ab7f7-9348-427d-b09a-d8665c696fde)
Jun  1 03:05:00.060: INFO: Lookups using dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6564.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6564.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6564.svc.cluster.local jessie_udp@dns-test-service-2.dns-6564.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6564.svc.cluster.local]

Jun  1 03:05:05.061: INFO: DNS probes using dns-6564/dns-test-288ab7f7-9348-427d-b09a-d8665c696fde succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 03:05:05.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6564" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":292,"completed":242,"skipped":4033,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 03:05:05.176: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0e321bf5-2049-4de2-aac5-0d13b8a081a8" in namespace "downward-api-8636" to be "Succeeded or Failed"
Jun  1 03:05:05.180: INFO: Pod "downwardapi-volume-0e321bf5-2049-4de2-aac5-0d13b8a081a8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.321584ms
Jun  1 03:05:07.184: INFO: Pod "downwardapi-volume-0e321bf5-2049-4de2-aac5-0d13b8a081a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008027905s
Jun  1 03:05:09.188: INFO: Pod "downwardapi-volume-0e321bf5-2049-4de2-aac5-0d13b8a081a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011663876s
STEP: Saw pod success
Jun  1 03:05:09.188: INFO: Pod "downwardapi-volume-0e321bf5-2049-4de2-aac5-0d13b8a081a8" satisfied condition "Succeeded or Failed"
Jun  1 03:05:09.191: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-0e321bf5-2049-4de2-aac5-0d13b8a081a8 container client-container: <nil>
STEP: delete the pod
Jun  1 03:05:09.211: INFO: Waiting for pod downwardapi-volume-0e321bf5-2049-4de2-aac5-0d13b8a081a8 to disappear
Jun  1 03:05:09.213: INFO: Pod downwardapi-volume-0e321bf5-2049-4de2-aac5-0d13b8a081a8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 03:05:09.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8636" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":292,"completed":243,"skipped":4058,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 28 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 03:05:18.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6382" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":292,"completed":244,"skipped":4073,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-5b8904db-3dd1-4a8b-b789-115095caf82d
STEP: Creating a pod to test consume configMaps
Jun  1 03:05:18.823: INFO: Waiting up to 5m0s for pod "pod-configmaps-136ab185-8904-4dd7-b624-1b58c23bd75a" in namespace "configmap-1230" to be "Succeeded or Failed"
Jun  1 03:05:18.826: INFO: Pod "pod-configmaps-136ab185-8904-4dd7-b624-1b58c23bd75a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.453659ms
Jun  1 03:05:20.829: INFO: Pod "pod-configmaps-136ab185-8904-4dd7-b624-1b58c23bd75a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006128608s
STEP: Saw pod success
Jun  1 03:05:20.829: INFO: Pod "pod-configmaps-136ab185-8904-4dd7-b624-1b58c23bd75a" satisfied condition "Succeeded or Failed"
Jun  1 03:05:20.834: INFO: Trying to get logs from node kind-worker pod pod-configmaps-136ab185-8904-4dd7-b624-1b58c23bd75a container configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 03:05:20.857: INFO: Waiting for pod pod-configmaps-136ab185-8904-4dd7-b624-1b58c23bd75a to disappear
Jun  1 03:05:20.861: INFO: Pod pod-configmaps-136ab185-8904-4dd7-b624-1b58c23bd75a no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 03:05:20.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1230" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":245,"skipped":4076,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 26 lines ...
Jun  1 03:06:10.948: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7101 /api/v1/namespaces/watch-7101/configmaps/e2e-watch-test-configmap-b f566caae-d3f1-4d79-a7e1-028df2da7388 32469 0 2020-06-01 03:06:00 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-06-01 03:06:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jun  1 03:06:10.948: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7101 /api/v1/namespaces/watch-7101/configmaps/e2e-watch-test-configmap-b f566caae-d3f1-4d79-a7e1-028df2da7388 32469 0 2020-06-01 03:06:00 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-06-01 03:06:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Jun  1 03:06:20.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7101" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":292,"completed":246,"skipped":4080,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}

------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-6e27b354-9541-49dd-8d8d-6d978973a129
STEP: Creating a pod to test consume configMaps
Jun  1 03:06:20.988: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3eaf74dc-6365-40aa-90de-e9c3e29c48cf" in namespace "projected-466" to be "Succeeded or Failed"
Jun  1 03:06:20.990: INFO: Pod "pod-projected-configmaps-3eaf74dc-6365-40aa-90de-e9c3e29c48cf": Phase="Pending", Reason="", readiness=false. Elapsed: 1.912875ms
Jun  1 03:06:22.995: INFO: Pod "pod-projected-configmaps-3eaf74dc-6365-40aa-90de-e9c3e29c48cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007368041s
STEP: Saw pod success
Jun  1 03:06:22.995: INFO: Pod "pod-projected-configmaps-3eaf74dc-6365-40aa-90de-e9c3e29c48cf" satisfied condition "Succeeded or Failed"
Jun  1 03:06:22.998: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-3eaf74dc-6365-40aa-90de-e9c3e29c48cf container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 03:06:23.019: INFO: Waiting for pod pod-projected-configmaps-3eaf74dc-6365-40aa-90de-e9c3e29c48cf to disappear
Jun  1 03:06:23.021: INFO: Pod pod-projected-configmaps-3eaf74dc-6365-40aa-90de-e9c3e29c48cf no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 03:06:23.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-466" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":247,"skipped":4080,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
... skipping 42 lines ...
Jun  1 03:06:32.313: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 03:06:32.437: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  test/e2e/framework/framework.go:175
Jun  1 03:06:32.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-8048" for this suite.
•{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":248,"skipped":4082,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}

------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Jun  1 03:06:33.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-523" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":292,"completed":249,"skipped":4082,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-3fbbe24d-ac06-4673-8562-abb7c6a288d6
STEP: Creating a pod to test consume configMaps
Jun  1 03:06:33.555: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c132f08e-d208-40ac-85de-19d90d8067f9" in namespace "projected-2417" to be "Succeeded or Failed"
Jun  1 03:06:33.560: INFO: Pod "pod-projected-configmaps-c132f08e-d208-40ac-85de-19d90d8067f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.811709ms
Jun  1 03:06:35.569: INFO: Pod "pod-projected-configmaps-c132f08e-d208-40ac-85de-19d90d8067f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014316625s
STEP: Saw pod success
Jun  1 03:06:35.569: INFO: Pod "pod-projected-configmaps-c132f08e-d208-40ac-85de-19d90d8067f9" satisfied condition "Succeeded or Failed"
Jun  1 03:06:35.573: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-c132f08e-d208-40ac-85de-19d90d8067f9 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 03:06:35.598: INFO: Waiting for pod pod-projected-configmaps-c132f08e-d208-40ac-85de-19d90d8067f9 to disappear
Jun  1 03:06:35.601: INFO: Pod pod-projected-configmaps-c132f08e-d208-40ac-85de-19d90d8067f9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 03:06:35.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2417" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":292,"completed":250,"skipped":4089,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 28 lines ...
Jun  1 03:06:59.852: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 03:07:00.025: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Jun  1 03:07:00.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9131" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":251,"skipped":4103,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Jun  1 03:07:00.277: INFO: stderr: ""
Jun  1 03:07:00.277: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 03:07:00.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-736" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":292,"completed":252,"skipped":4106,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-9d530ddb-3969-49a9-8d59-d797686f01d1
STEP: Creating a pod to test consume secrets
Jun  1 03:07:00.321: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6f7cb177-cd88-410c-b229-0c0a67735265" in namespace "projected-3612" to be "Succeeded or Failed"
Jun  1 03:07:00.324: INFO: Pod "pod-projected-secrets-6f7cb177-cd88-410c-b229-0c0a67735265": Phase="Pending", Reason="", readiness=false. Elapsed: 2.646898ms
Jun  1 03:07:02.328: INFO: Pod "pod-projected-secrets-6f7cb177-cd88-410c-b229-0c0a67735265": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006701395s
Jun  1 03:07:04.332: INFO: Pod "pod-projected-secrets-6f7cb177-cd88-410c-b229-0c0a67735265": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010933606s
STEP: Saw pod success
Jun  1 03:07:04.332: INFO: Pod "pod-projected-secrets-6f7cb177-cd88-410c-b229-0c0a67735265" satisfied condition "Succeeded or Failed"
Jun  1 03:07:04.335: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-6f7cb177-cd88-410c-b229-0c0a67735265 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun  1 03:07:04.349: INFO: Waiting for pod pod-projected-secrets-6f7cb177-cd88-410c-b229-0c0a67735265 to disappear
Jun  1 03:07:04.352: INFO: Pod pod-projected-secrets-6f7cb177-cd88-410c-b229-0c0a67735265 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Jun  1 03:07:04.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3612" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":253,"skipped":4108,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Jun  1 03:07:04.386: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 03:07:07.911: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 03:07:21.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-346" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":292,"completed":254,"skipped":4114,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
Jun  1 03:07:28.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1724" for this suite.
STEP: Destroying namespace "webhook-1724-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":292,"completed":255,"skipped":4148,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-7ffa45db-363b-452a-a8a6-77fde30433ef
STEP: Creating a pod to test consume configMaps
Jun  1 03:07:28.482: INFO: Waiting up to 5m0s for pod "pod-configmaps-ece316f2-3894-4833-aef4-da6ecc6bf4cd" in namespace "configmap-8575" to be "Succeeded or Failed"
Jun  1 03:07:28.489: INFO: Pod "pod-configmaps-ece316f2-3894-4833-aef4-da6ecc6bf4cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.718886ms
Jun  1 03:07:30.493: INFO: Pod "pod-configmaps-ece316f2-3894-4833-aef4-da6ecc6bf4cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010668734s
Jun  1 03:07:32.497: INFO: Pod "pod-configmaps-ece316f2-3894-4833-aef4-da6ecc6bf4cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01487902s
STEP: Saw pod success
Jun  1 03:07:32.497: INFO: Pod "pod-configmaps-ece316f2-3894-4833-aef4-da6ecc6bf4cd" satisfied condition "Succeeded or Failed"
Jun  1 03:07:32.500: INFO: Trying to get logs from node kind-worker pod pod-configmaps-ece316f2-3894-4833-aef4-da6ecc6bf4cd container configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 03:07:32.515: INFO: Waiting for pod pod-configmaps-ece316f2-3894-4833-aef4-da6ecc6bf4cd to disappear
Jun  1 03:07:32.517: INFO: Pod pod-configmaps-ece316f2-3894-4833-aef4-da6ecc6bf4cd no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 03:07:32.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8575" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":292,"completed":256,"skipped":4151,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Jun  1 03:07:32.758: INFO: stderr: ""
Jun  1 03:07:32.758: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://127.0.0.1:37271\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://127.0.0.1:37271/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 03:07:32.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9178" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":292,"completed":257,"skipped":4155,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 16 lines ...
  test/e2e/framework/framework.go:175
Jun  1 03:07:47.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8346" for this suite.
STEP: Destroying namespace "nsdeletetest-1674" for this suite.
Jun  1 03:07:47.868: INFO: Namespace nsdeletetest-1674 was already deleted
STEP: Destroying namespace "nsdeletetest-3485" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":292,"completed":258,"skipped":4162,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSS
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] 
  removing taint cancels eviction [Disruptive] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
... skipping 20 lines ...
STEP: Waiting some time to make sure that toleration time passed.
Jun  1 03:10:03.159: INFO: Pod wasn't evicted. Test successful
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  test/e2e/framework/framework.go:175
Jun  1 03:10:03.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-single-pod-4637" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":292,"completed":259,"skipped":4167,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 03:10:19.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-343" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":292,"completed":260,"skipped":4190,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 36 lines ...

W0601 03:10:20.673036   12159 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Jun  1 03:10:20.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3781" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":292,"completed":261,"skipped":4210,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Jun  1 03:10:26.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3981" for this suite.
STEP: Destroying namespace "webhook-3981-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":292,"completed":262,"skipped":4222,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 03:10:30.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4432" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":263,"skipped":4240,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 03:10:30.473: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e7664648-82af-4e0f-9c47-e69db61d230d" in namespace "projected-2338" to be "Succeeded or Failed"
Jun  1 03:10:30.476: INFO: Pod "downwardapi-volume-e7664648-82af-4e0f-9c47-e69db61d230d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.775945ms
Jun  1 03:10:32.481: INFO: Pod "downwardapi-volume-e7664648-82af-4e0f-9c47-e69db61d230d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007647337s
STEP: Saw pod success
Jun  1 03:10:32.481: INFO: Pod "downwardapi-volume-e7664648-82af-4e0f-9c47-e69db61d230d" satisfied condition "Succeeded or Failed"
Jun  1 03:10:32.484: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-e7664648-82af-4e0f-9c47-e69db61d230d container client-container: <nil>
STEP: delete the pod
Jun  1 03:10:32.498: INFO: Waiting for pod downwardapi-volume-e7664648-82af-4e0f-9c47-e69db61d230d to disappear
Jun  1 03:10:32.500: INFO: Pod downwardapi-volume-e7664648-82af-4e0f-9c47-e69db61d230d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 03:10:32.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2338" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":264,"skipped":4255,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}

------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 03:10:36.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1051" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":292,"completed":265,"skipped":4255,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-1ad96ab6-8aa5-46c7-8379-8b1c892a7e33
STEP: Creating a pod to test consume configMaps
Jun  1 03:10:36.659: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4e8d8b5b-9741-4729-949f-42bbd47e7a56" in namespace "projected-8268" to be "Succeeded or Failed"
Jun  1 03:10:36.661: INFO: Pod "pod-projected-configmaps-4e8d8b5b-9741-4729-949f-42bbd47e7a56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.253303ms
Jun  1 03:10:38.665: INFO: Pod "pod-projected-configmaps-4e8d8b5b-9741-4729-949f-42bbd47e7a56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00596935s
STEP: Saw pod success
Jun  1 03:10:38.665: INFO: Pod "pod-projected-configmaps-4e8d8b5b-9741-4729-949f-42bbd47e7a56" satisfied condition "Succeeded or Failed"
Jun  1 03:10:38.669: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-4e8d8b5b-9741-4729-949f-42bbd47e7a56 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 03:10:38.691: INFO: Waiting for pod pod-projected-configmaps-4e8d8b5b-9741-4729-949f-42bbd47e7a56 to disappear
Jun  1 03:10:38.695: INFO: Pod pod-projected-configmaps-4e8d8b5b-9741-4729-949f-42bbd47e7a56 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 03:10:38.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8268" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":292,"completed":266,"skipped":4276,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 03:10:38.704: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jun  1 03:10:38.757: INFO: Waiting up to 5m0s for pod "pod-a5db53af-370b-4cf2-a688-57614c27f8db" in namespace "emptydir-6472" to be "Succeeded or Failed"
Jun  1 03:10:38.760: INFO: Pod "pod-a5db53af-370b-4cf2-a688-57614c27f8db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.690856ms
Jun  1 03:10:40.762: INFO: Pod "pod-a5db53af-370b-4cf2-a688-57614c27f8db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005446108s
STEP: Saw pod success
Jun  1 03:10:40.762: INFO: Pod "pod-a5db53af-370b-4cf2-a688-57614c27f8db" satisfied condition "Succeeded or Failed"
Jun  1 03:10:40.765: INFO: Trying to get logs from node kind-worker pod pod-a5db53af-370b-4cf2-a688-57614c27f8db container test-container: <nil>
STEP: delete the pod
Jun  1 03:10:40.780: INFO: Waiting for pod pod-a5db53af-370b-4cf2-a688-57614c27f8db to disappear
Jun  1 03:10:40.782: INFO: Pod pod-a5db53af-370b-4cf2-a688-57614c27f8db no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 03:10:40.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6472" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":267,"skipped":4283,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}

------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Jun  1 03:10:46.010: INFO: stderr: ""
Jun  1 03:10:46.010: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7967-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<map[string]>\n     Specification of Waldo\n\n   status\t<Object>\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 03:10:48.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5550" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":292,"completed":268,"skipped":4283,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 03:10:49.005: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun  1 03:10:49.035: INFO: Waiting up to 5m0s for pod "pod-226b8cf4-4499-4a41-8d15-49aa65c705c1" in namespace "emptydir-4012" to be "Succeeded or Failed"
Jun  1 03:10:49.037: INFO: Pod "pod-226b8cf4-4499-4a41-8d15-49aa65c705c1": Phase="Pending", Reason="", readiness=false. Elapsed: 1.951407ms
Jun  1 03:10:51.041: INFO: Pod "pod-226b8cf4-4499-4a41-8d15-49aa65c705c1": Phase="Running", Reason="", readiness=true. Elapsed: 2.005517096s
Jun  1 03:10:53.044: INFO: Pod "pod-226b8cf4-4499-4a41-8d15-49aa65c705c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00896805s
STEP: Saw pod success
Jun  1 03:10:53.044: INFO: Pod "pod-226b8cf4-4499-4a41-8d15-49aa65c705c1" satisfied condition "Succeeded or Failed"
Jun  1 03:10:53.047: INFO: Trying to get logs from node kind-worker pod pod-226b8cf4-4499-4a41-8d15-49aa65c705c1 container test-container: <nil>
STEP: delete the pod
Jun  1 03:10:53.059: INFO: Waiting for pod pod-226b8cf4-4499-4a41-8d15-49aa65c705c1 to disappear
Jun  1 03:10:53.061: INFO: Pod pod-226b8cf4-4499-4a41-8d15-49aa65c705c1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 03:10:53.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4012" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":269,"skipped":4304,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-60046665-a0a5-4b57-b638-5c01a9530cae
STEP: Creating a pod to test consume configMaps
Jun  1 03:10:53.101: INFO: Waiting up to 5m0s for pod "pod-configmaps-f3698471-ade8-4cfa-b938-78b0b65713c0" in namespace "configmap-8086" to be "Succeeded or Failed"
Jun  1 03:10:53.104: INFO: Pod "pod-configmaps-f3698471-ade8-4cfa-b938-78b0b65713c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.648525ms
Jun  1 03:10:55.108: INFO: Pod "pod-configmaps-f3698471-ade8-4cfa-b938-78b0b65713c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006213193s
Jun  1 03:10:57.111: INFO: Pod "pod-configmaps-f3698471-ade8-4cfa-b938-78b0b65713c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009621128s
STEP: Saw pod success
Jun  1 03:10:57.111: INFO: Pod "pod-configmaps-f3698471-ade8-4cfa-b938-78b0b65713c0" satisfied condition "Succeeded or Failed"
Jun  1 03:10:57.116: INFO: Trying to get logs from node kind-worker pod pod-configmaps-f3698471-ade8-4cfa-b938-78b0b65713c0 container configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 03:10:57.135: INFO: Waiting for pod pod-configmaps-f3698471-ade8-4cfa-b938-78b0b65713c0 to disappear
Jun  1 03:10:57.139: INFO: Pod pod-configmaps-f3698471-ade8-4cfa-b938-78b0b65713c0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 03:10:57.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8086" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":292,"completed":270,"skipped":4333,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
Jun  1 03:11:09.441: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 03:11:12.413: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 03:11:24.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8233" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":292,"completed":271,"skipped":4349,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 9 lines ...
STEP: creating pod
Jun  1 03:11:26.712: INFO: Pod pod-hostip-d4438649-206c-479b-aaf8-2703c30811c4 has hostIP: 172.18.0.4
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Jun  1 03:11:26.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6284" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":292,"completed":272,"skipped":4379,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 03:11:26.719: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-f3a77698-68aa-4675-a57a-682416899bf1
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Jun  1 03:11:26.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4220" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":292,"completed":273,"skipped":4383,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 03:11:26.750: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 03:13:26.790: INFO: Deleting pod "var-expansion-a2135ed0-5c2d-4713-a5fb-eb84d84998fd" in namespace "var-expansion-1620"
Jun  1 03:13:26.796: INFO: Wait up to 5m0s for pod "var-expansion-a2135ed0-5c2d-4713-a5fb-eb84d84998fd" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 03:13:30.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1620" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":292,"completed":274,"skipped":4403,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-downwardapi-z8dv
STEP: Creating a pod to test atomic-volume-subpath
Jun  1 03:13:30.850: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-z8dv" in namespace "subpath-783" to be "Succeeded or Failed"
Jun  1 03:13:30.853: INFO: Pod "pod-subpath-test-downwardapi-z8dv": Phase="Pending", Reason="", readiness=false. Elapsed: 3.185661ms
Jun  1 03:13:32.857: INFO: Pod "pod-subpath-test-downwardapi-z8dv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007001496s
Jun  1 03:13:34.861: INFO: Pod "pod-subpath-test-downwardapi-z8dv": Phase="Running", Reason="", readiness=true. Elapsed: 4.010895641s
Jun  1 03:13:36.864: INFO: Pod "pod-subpath-test-downwardapi-z8dv": Phase="Running", Reason="", readiness=true. Elapsed: 6.014525688s
Jun  1 03:13:38.868: INFO: Pod "pod-subpath-test-downwardapi-z8dv": Phase="Running", Reason="", readiness=true. Elapsed: 8.018596295s
Jun  1 03:13:40.873: INFO: Pod "pod-subpath-test-downwardapi-z8dv": Phase="Running", Reason="", readiness=true. Elapsed: 10.022874095s
... skipping 2 lines ...
Jun  1 03:13:46.884: INFO: Pod "pod-subpath-test-downwardapi-z8dv": Phase="Running", Reason="", readiness=true. Elapsed: 16.033965144s
Jun  1 03:13:48.889: INFO: Pod "pod-subpath-test-downwardapi-z8dv": Phase="Running", Reason="", readiness=true. Elapsed: 18.039576418s
Jun  1 03:13:50.893: INFO: Pod "pod-subpath-test-downwardapi-z8dv": Phase="Running", Reason="", readiness=true. Elapsed: 20.043391479s
Jun  1 03:13:52.897: INFO: Pod "pod-subpath-test-downwardapi-z8dv": Phase="Running", Reason="", readiness=true. Elapsed: 22.047069174s
Jun  1 03:13:54.901: INFO: Pod "pod-subpath-test-downwardapi-z8dv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.050928009s
STEP: Saw pod success
Jun  1 03:13:54.901: INFO: Pod "pod-subpath-test-downwardapi-z8dv" satisfied condition "Succeeded or Failed"
Jun  1 03:13:54.903: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-downwardapi-z8dv container test-container-subpath-downwardapi-z8dv: <nil>
STEP: delete the pod
Jun  1 03:13:54.934: INFO: Waiting for pod pod-subpath-test-downwardapi-z8dv to disappear
Jun  1 03:13:54.937: INFO: Pod pod-subpath-test-downwardapi-z8dv no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-z8dv
Jun  1 03:13:54.937: INFO: Deleting pod "pod-subpath-test-downwardapi-z8dv" in namespace "subpath-783"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Jun  1 03:13:54.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-783" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":292,"completed":275,"skipped":4418,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-64308ef5-46ec-40a5-bf0f-40264b275db2
STEP: Creating a pod to test consume secrets
Jun  1 03:13:54.981: INFO: Waiting up to 5m0s for pod "pod-secrets-4426cd02-2fec-455c-83fc-f549722fc926" in namespace "secrets-2994" to be "Succeeded or Failed"
Jun  1 03:13:54.984: INFO: Pod "pod-secrets-4426cd02-2fec-455c-83fc-f549722fc926": Phase="Pending", Reason="", readiness=false. Elapsed: 2.576806ms
Jun  1 03:13:56.988: INFO: Pod "pod-secrets-4426cd02-2fec-455c-83fc-f549722fc926": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00648852s
STEP: Saw pod success
Jun  1 03:13:56.988: INFO: Pod "pod-secrets-4426cd02-2fec-455c-83fc-f549722fc926" satisfied condition "Succeeded or Failed"
Jun  1 03:13:56.990: INFO: Trying to get logs from node kind-worker pod pod-secrets-4426cd02-2fec-455c-83fc-f549722fc926 container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 03:13:57.003: INFO: Waiting for pod pod-secrets-4426cd02-2fec-455c-83fc-f549722fc926 to disappear
Jun  1 03:13:57.006: INFO: Pod pod-secrets-4426cd02-2fec-455c-83fc-f549722fc926 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 03:13:57.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2994" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":276,"skipped":4435,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
Jun  1 03:13:59.573: INFO: Successfully updated pod "labelsupdate6f4adbc3-0bb2-4382-966b-a7c33a84d32a"
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 03:14:01.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9701" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":292,"completed":277,"skipped":4531,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 15 lines ...
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Jun  1 03:14:07.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6429" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":292,"completed":278,"skipped":4538,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 27 lines ...
  test/e2e/framework/framework.go:175
Jun  1 03:14:14.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6532" for this suite.
STEP: Destroying namespace "webhook-6532-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":292,"completed":279,"skipped":4572,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 85 lines ...
Jun  1 03:15:17.235: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jun  1 03:15:17.235: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jun  1 03:15:17.235: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jun  1 03:15:17.235: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-5148 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 03:15:17.565: INFO: rc: 1
Jun  1 03:15:17.565: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-5148 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "f37fcf91f19cbfcdf8bab87063e0678bcb5b90cc2a59313b90c12bc0db95a6ba": OCI runtime exec failed: exec failed: container_linux.go:353: starting container process caused: process_linux.go:99: executing setns process caused: exit status 1: unknown

error:
exit status 1
Jun  1 03:15:27.566: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-5148 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 03:15:27.785: INFO: rc: 1
Jun  1 03:15:27.786: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-5148 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Jun  1 03:15:37.786: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-5148 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 03:15:37.970: INFO: rc: 1
Jun  1 03:15:37.970: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-5148 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
... skipping 80 lines ...
Jun  1 03:17:09.623: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-5148 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 03:17:09.844: INFO: rc: 1
Jun  1 03:17:09.844: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-5148 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 03:17:19.844: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-5148 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 03:17:20.041: INFO: rc: 1
Jun  1 03:17:20.042: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-5148 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jun  1 03:17:30.042: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-5148 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 03:17:30.249: INFO: rc: 1
Jun  1 03:17:30.249: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-5148 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
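The blocks above show the e2e test retrying the same kubectl exec every 10 seconds against pod ss-2, which the API server reports as NotFound, so every attempt exits 1 and the loop can only end when the surrounding process is killed. A minimal Go sketch of that retry shape follows; the server address, kubeconfig path, namespace, pod name, and shell command are taken verbatim from the log, while the function names and loop structure are illustrative and not the actual k8s.io/kubernetes e2e framework source:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runHostCmd shells out to kubectl exec the way the log shows.
// Flag values are copied from the log lines above.
func runHostCmd(ns, pod, cmd string) error {
	out, err := exec.Command(
		"kubectl",
		"--server=https://127.0.0.1:37271",
		"--kubeconfig=/root/.kube/kind-test-config",
		"exec", "--namespace="+ns, pod,
		"--", "/bin/sh", "-x", "-c", cmd,
	).CombinedOutput()
	if err != nil {
		return fmt.Errorf("error running kubectl: %v\noutput: %s", err, out)
	}
	return nil
}

func main() {
	// Retry every 10s, as the log shows. Because the pod is gone rather
	// than merely unready, the NotFound error never clears, so this loop
	// spins until the job-level timeout kills the whole process.
	for {
		err := runHostCmd("statefulset-5148", "ss-2",
			"mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true")
		if err == nil {
			return
		}
		fmt.Printf("Waiting 10s to retry failed RunHostCmd: %v\n", err)
		time.Sleep(10 * time.Second)
	}
}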
{"component":"entrypoint","file":"prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","time":"2020-06-01T03:17:33Z"}
Jun  1 03:17:40.249: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-5148 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jun  1 03:17:40.454: INFO: rc: 1
Jun  1 03:17:40.454: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:37271 --kubeconfig=/root/.kube/kind-test-config exec --namespace=statefulset-5148 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
{"component":"entrypoint","file":"prow/entrypoint/run.go:245","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","time":"2020-06-01T03:17:48Z"}