Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-06-01 14:35
Elapsed: 2h0m
Revision: master
Resultstore: https://source.cloud.google.com/results/invocations/0c41e7b4-cfc1-47b5-b42f-53df066bd796/targets/test

No Test Failures!


Error lines from build-log.txt

... skipping 69 lines ...
Analyzing: 4 targets (20 packages loaded, 27 targets configured)
Analyzing: 4 targets (641 packages loaded, 8136 targets configured)
Analyzing: 4 targets (1936 packages loaded, 14023 targets configured)
Analyzing: 4 targets (2268 packages loaded, 15447 targets configured)
Analyzing: 4 targets (2268 packages loaded, 15447 targets configured)
Analyzing: 4 targets (2269 packages loaded, 15447 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/bazel_gazelle/internal/go_repository.bzl:184:13: org_golang_x_tools: gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go:2:43: expected 'package', found 'EOF'
gazelle: found packages complexnums (complexnums.go) and conversions (conversions.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gccgoimporter/testdata
gazelle: found packages b (b.go) and exports (exports.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gcimporter/testdata
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go:1:34: expected 'package', found 'EOF'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go:1:16: expected ';', found '.'
gazelle: finding module path for import domain.name/importdecl: exit status 1: can't load package: package domain.name/importdecl: cannot find module providing package domain.name/importdecl
gazelle: finding module path for import old.com/one: exit status 1: can't load package: package old.com/one: cannot find module providing package old.com/one
gazelle: finding module path for import titanic.biz/bar: exit status 1: can't load package: package titanic.biz/bar: cannot find module providing package titanic.biz/bar
gazelle: finding module path for import titanic.biz/foo: exit status 1: can't load package: package titanic.biz/foo: cannot find module providing package titanic.biz/foo
gazelle: finding module path for import fruit.io/pear: exit status 1: can't load package: package fruit.io/pear: cannot find module providing package fruit.io/pear
gazelle: finding module path for import fruit.io/banana: exit status 1: can't load package: package fruit.io/banana: cannot find module providing package fruit.io/banana
... skipping 154 lines ...
WARNING: Waiting for server process to terminate (waited 10 seconds, waiting at most 60)
WARNING: Waiting for server process to terminate (waited 30 seconds, waiting at most 60)
INFO: Waited 60 seconds for server process (pid=5874) to terminate.
WARNING: Waiting for server process to terminate (waited 5 seconds, waiting at most 10)
WARNING: Waiting for server process to terminate (waited 10 seconds, waiting at most 10)
INFO: Waited 10 seconds for server process (pid=5874) to terminate.
FATAL: Attempted to kill stale server process (pid=5874) using SIGKILL, but it did not die in a timely fashion.
+ true
+ pkill ^bazel
+ true
+ mkdir -p _output/bin/
+ cp bazel-bin/test/e2e/e2e.test _output/bin/
+ find /home/prow/go/src/k8s.io/kubernetes/bazel-bin/ -name kubectl -type f
... skipping 46 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.3
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.3
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: kind-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.3
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 32 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.4
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.4
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: kind-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 32 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.2
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.2
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.18.0.2
... skipping 4 lines ...
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.2
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 37 lines ...
I0601 14:43:07.566295     220 checks.go:376] validating the presence of executable ebtables
I0601 14:43:07.566329     220 checks.go:376] validating the presence of executable ethtool
I0601 14:43:07.566361     220 checks.go:376] validating the presence of executable socat
I0601 14:43:07.566413     220 checks.go:376] validating the presence of executable tc
I0601 14:43:07.566445     220 checks.go:376] validating the presence of executable touch
I0601 14:43:07.566495     220 checks.go:520] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0601 14:43:07.575369     220 checks.go:406] checking whether the given node name is reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
... skipping 96 lines ...
I0601 14:43:21.672336     220 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 20 milliseconds
I0601 14:43:22.155470     220 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 16 milliseconds
I0601 14:43:22.652562     220 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 13 milliseconds
I0601 14:43:23.147694     220 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 8 milliseconds
I0601 14:43:23.653703     220 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 12 milliseconds
I0601 14:43:24.150716     220 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 11 milliseconds
I0601 14:43:34.458165     220 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 9818 milliseconds
I0601 14:43:34.641681     220 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 2 milliseconds
I0601 14:43:35.143926     220 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 4 milliseconds
I0601 14:43:35.641563     220 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 2 milliseconds
I0601 14:43:36.143574     220 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 200 OK in 4 milliseconds
I0601 14:43:36.143686     220 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[apiclient] All control plane components are healthy after 22.527635 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0601 14:43:36.150088     220 round_trippers.go:443] POST https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 4 milliseconds
I0601 14:43:36.156215     220 round_trippers.go:443] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 5 milliseconds
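The healthz probes above show kubeadm's readiness protocol: poll GET /healthz, treat 500 responses and connection errors alike as "not ready yet", and proceed as soon as a 200 OK arrives. A minimal Python sketch of that loop follows; the deadline and interval values are illustrative assumptions, and certificate verification is disabled because the test cluster's CA is not trusted locally.

import ssl
import time
import urllib.error
import urllib.request

def wait_for_healthz(url, deadline_s=300.0, interval_s=0.5):
    # Accept the API server's self-signed certificate for this sketch.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        try:
            with urllib.request.urlopen(url, context=ctx, timeout=10) as resp:
                if resp.status == 200:  # kubeadm proceeds on the first 200 OK
                    return True
        except (urllib.error.URLError, OSError):
            pass  # 500 Internal Server Error or a refused connection: keep waiting
        time.sleep(interval_s)
    return False

wait_for_healthz("https://kind-control-plane:6443/healthz?timeout=10s")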
... skipping 109 lines ...
I0601 14:43:48.299015     588 checks.go:376] validating the presence of executable ebtables
I0601 14:43:48.299046     588 checks.go:376] validating the presence of executable ethtool
I0601 14:43:48.299062     588 checks.go:376] validating the presence of executable socat
I0601 14:43:48.299154     588 checks.go:376] validating the presence of executable tc
I0601 14:43:48.299178     588 checks.go:376] validating the presence of executable touch
I0601 14:43:48.299206     588 checks.go:520] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_PIDS: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0601 14:43:48.328810     588 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0601 14:43:48.346441     588 checks.go:618] validating kubelet version
I0601 14:43:48.609728     588 checks.go:128] validating if the "kubelet" service is enabled and active
I0601 14:43:48.631069     588 checks.go:201] validating availability of port 10250
I0601 14:43:48.631288     588 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
I0601 14:43:48.631320     588 checks.go:432] validating if the connectivity type is via proxy or direct
... skipping 71 lines ...
I0601 14:43:48.303866     591 checks.go:376] validating the presence of executable ebtables
I0601 14:43:48.303916     591 checks.go:376] validating the presence of executable ethtool
I0601 14:43:48.303939     591 checks.go:376] validating the presence of executable socat
I0601 14:43:48.303975     591 checks.go:376] validating the presence of executable tc
I0601 14:43:48.304004     591 checks.go:376] validating the presence of executable touch
I0601 14:43:48.304048     591 checks.go:520] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0601 14:43:48.314153     591 checks.go:406] checking whether the given node name is reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
... skipping 82 lines ...
+ GINKGO_PID=11322
+ wait 11322
+ ./hack/ginkgo-e2e.sh --provider=skeleton --num-nodes=2 --ginkgo.focus=\[Conformance\] --ginkgo.skip= --report-dir=/logs/artifacts --disable-log-dump=true
Conformance test: not doing test setup.
I0601 14:44:30.289927   11956 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0601 14:44:30.290097   11956 e2e.go:129] Starting e2e run "c5ddb8ac-f47a-4b76-8a94-ef9573d51cfb" on Ginkgo node 1
{"msg":"Test Suite starting","total":292,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1591022668 - Will randomize all specs
Will run 292 of 5101 specs
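Each completed spec also emits a one-line JSON progress record (the {"msg":...,"total":292,"completed":...,"skipped":...,"failed":...} lines interleaved below), so the plain-text log is machine-readable. A small Python sketch that tallies those records from a saved copy of the log; the build-log.txt path is an assumption.

import json
import re

# Matches the per-spec progress records embedded in the plain-text log, e.g.
#   {"msg":"PASSED [sig-cli] ...","total":292,"completed":1,"skipped":34,"failed":0}
PROGRESS = re.compile(r'\{"msg":.*?"failed":\d+\}')

records = []
with open("build-log.txt") as log:  # assumed local copy of this log
    for line in log:
        m = PROGRESS.search(line)
        if m:
            records.append(json.loads(m.group(0)))

passed = sum(r["msg"].startswith("PASSED") for r in records)
if records:
    last = records[-1]
    print(f'{passed} passed, {last["failed"]} failed, '
          f'{last["completed"]} of {last["total"]} specs completed')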

Jun  1 14:44:30.350: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 14:44:30.354: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun  1 14:44:30.369: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun  1 14:44:30.401: INFO: The status of Pod coredns-66bff467f8-7vh72 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun  1 14:44:30.401: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun  1 14:44:30.401: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Jun  1 14:44:30.401: INFO: POD                       NODE          PHASE    GRACE  CONDITIONS
Jun  1 14:44:30.401: INFO: coredns-66bff467f8-7vh72  kind-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:44:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:44:22 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:44:22 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-01 14:44:22 +0000 UTC  }]
Jun  1 14:44:30.401: INFO: 
Jun  1 14:44:32.420: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
... skipping 62 lines ...
Jun  1 14:44:37.213: INFO: stderr: ""
Jun  1 14:44:37.213: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 14:44:37.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6346" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":292,"completed":1,"skipped":34,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 14:44:37.223: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod
Jun  1 14:44:37.250: INFO: PodSpec: initContainers in spec.initContainers
Jun  1 14:45:27.618: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-67906a5b-5cc7-454b-a3c0-9eab1d1d35ed", GenerateName:"", Namespace:"init-container-4545", SelfLink:"/api/v1/namespaces/init-container-4545/pods/pod-init-67906a5b-5cc7-454b-a3c0-9eab1d1d35ed", UID:"889009af-af7e-4552-afc9-0485df1471de", ResourceVersion:"922", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726619477, loc:(*time.Location)(0x8006d20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"250551173"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002574f80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002574fa0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002574fc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002574fe0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-mdfwr", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0015ade40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mdfwr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", 
Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mdfwr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mdfwr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00223de68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001fddc70), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00223def0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00223df10)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00223df18), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00223df1c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726619477, loc:(*time.Location)(0x8006d20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726619477, loc:(*time.Location)(0x8006d20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726619477, loc:(*time.Location)(0x8006d20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726619477, loc:(*time.Location)(0x8006d20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.3", PodIP:"10.244.2.3", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.3"}}, StartTime:(*v1.Time)(0xc002575000), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001fddd50)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001fdddc0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://5d5f3f2f06772848666365f5d6dc03e230f192242dac18bf77575c0848edde84", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002575040), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002575020), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00223df9f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Jun  1 14:45:27.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4545" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":292,"completed":2,"skipped":42,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Jun  1 14:45:27.659: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Jun  1 14:45:35.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5722" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":292,"completed":3,"skipped":58,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 44 lines ...
Jun  1 14:45:46.335: INFO: stdout: "service/rm3 exposed\n"
Jun  1 14:45:46.339: INFO: Service rm3 in namespace kubectl-229 found.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 14:45:48.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-229" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":292,"completed":4,"skipped":60,"failed":0}
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-6572c8e8-4c85-47da-a5e3-9ee155ff65d5
STEP: Creating a pod to test consume secrets
Jun  1 14:45:48.406: INFO: Waiting up to 5m0s for pod "pod-secrets-4271fabb-1d23-43d5-aa6b-4d426bbcf5b7" in namespace "secrets-3401" to be "Succeeded or Failed"
Jun  1 14:45:48.409: INFO: Pod "pod-secrets-4271fabb-1d23-43d5-aa6b-4d426bbcf5b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.990749ms
Jun  1 14:45:50.413: INFO: Pod "pod-secrets-4271fabb-1d23-43d5-aa6b-4d426bbcf5b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00737888s
Jun  1 14:45:52.419: INFO: Pod "pod-secrets-4271fabb-1d23-43d5-aa6b-4d426bbcf5b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013264838s
STEP: Saw pod success
Jun  1 14:45:52.419: INFO: Pod "pod-secrets-4271fabb-1d23-43d5-aa6b-4d426bbcf5b7" satisfied condition "Succeeded or Failed"
Jun  1 14:45:52.423: INFO: Trying to get logs from node kind-worker pod pod-secrets-4271fabb-1d23-43d5-aa6b-4d426bbcf5b7 container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 14:45:52.436: INFO: Waiting for pod pod-secrets-4271fabb-1d23-43d5-aa6b-4d426bbcf5b7 to disappear
Jun  1 14:45:52.439: INFO: Pod pod-secrets-4271fabb-1d23-43d5-aa6b-4d426bbcf5b7 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 14:45:52.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3401" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":5,"skipped":66,"failed":0}
SSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-c3b44d8a-4ce9-412f-acb5-ffcb1de3551a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 14:46:00.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-727" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":6,"skipped":69,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-989b96dc-ba23-41b2-b29a-761ce16cf356
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 14:47:23.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8191" for this suite.
•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":7,"skipped":76,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 14:47:31.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-5245" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":292,"completed":8,"skipped":88,"failed":0}

------------------------------
[sig-network] Services 
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 67 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 14:48:16.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7165" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":292,"completed":9,"skipped":88,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-fc151754-e69f-48c5-8407-b773f3251d18
STEP: Creating a pod to test consume secrets
Jun  1 14:48:16.514: INFO: Waiting up to 5m0s for pod "pod-secrets-915f2c53-84d0-41f0-81c2-0cb11e7fb62c" in namespace "secrets-214" to be "Succeeded or Failed"
Jun  1 14:48:16.521: INFO: Pod "pod-secrets-915f2c53-84d0-41f0-81c2-0cb11e7fb62c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.075909ms
Jun  1 14:48:18.526: INFO: Pod "pod-secrets-915f2c53-84d0-41f0-81c2-0cb11e7fb62c": Phase="Running", Reason="", readiness=true. Elapsed: 2.01151298s
Jun  1 14:48:20.532: INFO: Pod "pod-secrets-915f2c53-84d0-41f0-81c2-0cb11e7fb62c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017449309s
STEP: Saw pod success
Jun  1 14:48:20.532: INFO: Pod "pod-secrets-915f2c53-84d0-41f0-81c2-0cb11e7fb62c" satisfied condition "Succeeded or Failed"
Jun  1 14:48:20.535: INFO: Trying to get logs from node kind-worker pod pod-secrets-915f2c53-84d0-41f0-81c2-0cb11e7fb62c container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 14:48:20.559: INFO: Waiting for pod pod-secrets-915f2c53-84d0-41f0-81c2-0cb11e7fb62c to disappear
Jun  1 14:48:20.562: INFO: Pod pod-secrets-915f2c53-84d0-41f0-81c2-0cb11e7fb62c no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 14:48:20.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-214" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":10,"skipped":92,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 14:48:20.604: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b865869-ee1b-40af-9c88-0561bf48dd10" in namespace "projected-7181" to be "Succeeded or Failed"
Jun  1 14:48:20.607: INFO: Pod "downwardapi-volume-7b865869-ee1b-40af-9c88-0561bf48dd10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.986811ms
Jun  1 14:48:22.612: INFO: Pod "downwardapi-volume-7b865869-ee1b-40af-9c88-0561bf48dd10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008395208s
Jun  1 14:48:24.616: INFO: Pod "downwardapi-volume-7b865869-ee1b-40af-9c88-0561bf48dd10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012041432s
STEP: Saw pod success
Jun  1 14:48:24.616: INFO: Pod "downwardapi-volume-7b865869-ee1b-40af-9c88-0561bf48dd10" satisfied condition "Succeeded or Failed"
Jun  1 14:48:24.619: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-7b865869-ee1b-40af-9c88-0561bf48dd10 container client-container: <nil>
STEP: delete the pod
Jun  1 14:48:24.637: INFO: Waiting for pod downwardapi-volume-7b865869-ee1b-40af-9c88-0561bf48dd10 to disappear
Jun  1 14:48:24.639: INFO: Pod downwardapi-volume-7b865869-ee1b-40af-9c88-0561bf48dd10 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 14:48:24.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7181" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":292,"completed":11,"skipped":148,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Jun  1 14:48:24.646: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jun  1 14:48:24.678: INFO: Waiting up to 5m0s for pod "downward-api-30357c68-2baa-422c-a958-4fc9607ceca0" in namespace "downward-api-2995" to be "Succeeded or Failed"
Jun  1 14:48:24.680: INFO: Pod "downward-api-30357c68-2baa-422c-a958-4fc9607ceca0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.278445ms
Jun  1 14:48:26.685: INFO: Pod "downward-api-30357c68-2baa-422c-a958-4fc9607ceca0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007511693s
Jun  1 14:48:28.690: INFO: Pod "downward-api-30357c68-2baa-422c-a958-4fc9607ceca0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011949648s
STEP: Saw pod success
Jun  1 14:48:28.690: INFO: Pod "downward-api-30357c68-2baa-422c-a958-4fc9607ceca0" satisfied condition "Succeeded or Failed"
Jun  1 14:48:28.693: INFO: Trying to get logs from node kind-worker pod downward-api-30357c68-2baa-422c-a958-4fc9607ceca0 container dapi-container: <nil>
STEP: delete the pod
Jun  1 14:48:28.710: INFO: Waiting for pod downward-api-30357c68-2baa-422c-a958-4fc9607ceca0 to disappear
Jun  1 14:48:28.714: INFO: Pod downward-api-30357c68-2baa-422c-a958-4fc9607ceca0 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Jun  1 14:48:28.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2995" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":292,"completed":12,"skipped":180,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 14:48:28.753: INFO: Waiting up to 5m0s for pod "downwardapi-volume-60313e45-7bb3-4e7c-ad56-39242a963781" in namespace "projected-3700" to be "Succeeded or Failed"
Jun  1 14:48:28.756: INFO: Pod "downwardapi-volume-60313e45-7bb3-4e7c-ad56-39242a963781": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043472ms
Jun  1 14:48:30.760: INFO: Pod "downwardapi-volume-60313e45-7bb3-4e7c-ad56-39242a963781": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006604969s
Jun  1 14:48:32.766: INFO: Pod "downwardapi-volume-60313e45-7bb3-4e7c-ad56-39242a963781": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012151468s
STEP: Saw pod success
Jun  1 14:48:32.766: INFO: Pod "downwardapi-volume-60313e45-7bb3-4e7c-ad56-39242a963781" satisfied condition "Succeeded or Failed"
Jun  1 14:48:32.768: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-60313e45-7bb3-4e7c-ad56-39242a963781 container client-container: <nil>
STEP: delete the pod
Jun  1 14:48:32.783: INFO: Waiting for pod downwardapi-volume-60313e45-7bb3-4e7c-ad56-39242a963781 to disappear
Jun  1 14:48:32.786: INFO: Pod downwardapi-volume-60313e45-7bb3-4e7c-ad56-39242a963781 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 14:48:32.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3700" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":292,"completed":13,"skipped":203,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Updating configmap configmap-test-upd-f475200e-42f4-49f1-a1d9-765ae4be83ee
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 14:49:47.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3881" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":14,"skipped":254,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 14:49:47.151: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-dddae72d-2612-4254-b33f-1cec37a9881c
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Jun  1 14:49:47.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5499" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":292,"completed":15,"skipped":263,"failed":0}

------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 22 lines ...
Jun  1 14:49:57.260: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-817 /api/v1/namespaces/watch-817/configmaps/e2e-watch-test-label-changed c23b8985-51e4-4194-bbdf-1cfac69399c9 2282 0 2020-06-01 14:49:47 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-06-01 14:49:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Jun  1 14:49:57.260: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-817 /api/v1/namespaces/watch-817/configmaps/e2e-watch-test-label-changed c23b8985-51e4-4194-bbdf-1cfac69399c9 2283 0 2020-06-01 14:49:47 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-06-01 14:49:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Jun  1 14:49:57.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-817" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":292,"completed":16,"skipped":263,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 14:49:57.305: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-4e8d82ff-1ee7-4feb-8495-dca32ab3406b" in namespace "security-context-test-4454" to be "Succeeded or Failed"
Jun  1 14:49:57.308: INFO: Pod "busybox-privileged-false-4e8d82ff-1ee7-4feb-8495-dca32ab3406b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230578ms
Jun  1 14:49:59.312: INFO: Pod "busybox-privileged-false-4e8d82ff-1ee7-4feb-8495-dca32ab3406b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006456917s
Jun  1 14:50:01.316: INFO: Pod "busybox-privileged-false-4e8d82ff-1ee7-4feb-8495-dca32ab3406b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010746253s
Jun  1 14:50:01.317: INFO: Pod "busybox-privileged-false-4e8d82ff-1ee7-4feb-8495-dca32ab3406b" satisfied condition "Succeeded or Failed"
Jun  1 14:50:01.324: INFO: Got logs for pod "busybox-privileged-false-4e8d82ff-1ee7-4feb-8495-dca32ab3406b": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Jun  1 14:50:01.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4454" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":17,"skipped":276,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-6a0ab088-c001-4ec7-aa57-12f4289133cb
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 14:50:09.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1304" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":18,"skipped":282,"failed":0}

------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 14:50:09.471: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun  1 14:50:09.509: INFO: Waiting up to 5m0s for pod "pod-f912866a-977d-4bb7-8372-012f6c920e52" in namespace "emptydir-5965" to be "Succeeded or Failed"
Jun  1 14:50:09.512: INFO: Pod "pod-f912866a-977d-4bb7-8372-012f6c920e52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.7303ms
Jun  1 14:50:11.517: INFO: Pod "pod-f912866a-977d-4bb7-8372-012f6c920e52": Phase="Running", Reason="", readiness=true. Elapsed: 2.008139047s
Jun  1 14:50:13.522: INFO: Pod "pod-f912866a-977d-4bb7-8372-012f6c920e52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012940285s
STEP: Saw pod success
Jun  1 14:50:13.522: INFO: Pod "pod-f912866a-977d-4bb7-8372-012f6c920e52" satisfied condition "Succeeded or Failed"
Jun  1 14:50:13.525: INFO: Trying to get logs from node kind-worker2 pod pod-f912866a-977d-4bb7-8372-012f6c920e52 container test-container: <nil>
STEP: delete the pod
Jun  1 14:50:13.556: INFO: Waiting for pod pod-f912866a-977d-4bb7-8372-012f6c920e52 to disappear
Jun  1 14:50:13.558: INFO: Pod pod-f912866a-977d-4bb7-8372-012f6c920e52 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 14:50:13.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5965" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":19,"skipped":282,"failed":0}
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 9 lines ...
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/framework.go:175
Jun  1 14:50:13.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6635" for this suite.
STEP: Destroying namespace "nspatchtest-365e3f33-8642-4473-87b3-e92dfc548e2b-2427" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":292,"completed":20,"skipped":285,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 16 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 14:50:26.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8016" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":292,"completed":21,"skipped":343,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-6e256abf-3b81-4a9e-bcc0-a69959052249
STEP: Creating a pod to test consume configMaps
Jun  1 14:50:26.774: INFO: Waiting up to 5m0s for pod "pod-configmaps-15ed7f10-0431-4732-992d-8ffb69a9fe05" in namespace "configmap-6401" to be "Succeeded or Failed"
Jun  1 14:50:26.776: INFO: Pod "pod-configmaps-15ed7f10-0431-4732-992d-8ffb69a9fe05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226651ms
Jun  1 14:50:28.781: INFO: Pod "pod-configmaps-15ed7f10-0431-4732-992d-8ffb69a9fe05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007542262s
STEP: Saw pod success
Jun  1 14:50:28.781: INFO: Pod "pod-configmaps-15ed7f10-0431-4732-992d-8ffb69a9fe05" satisfied condition "Succeeded or Failed"
Jun  1 14:50:28.784: INFO: Trying to get logs from node kind-worker pod pod-configmaps-15ed7f10-0431-4732-992d-8ffb69a9fe05 container configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 14:50:28.803: INFO: Waiting for pod pod-configmaps-15ed7f10-0431-4732-992d-8ffb69a9fe05 to disappear
Jun  1 14:50:28.807: INFO: Pod pod-configmaps-15ed7f10-0431-4732-992d-8ffb69a9fe05 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 14:50:28.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6401" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":22,"skipped":351,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 27 lines ...
Jun  1 14:50:49.056: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 14:50:49.229: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Jun  1 14:50:49.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1679" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":292,"completed":23,"skipped":364,"failed":0}

------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 14 lines ...
Jun  1 14:50:54.286: INFO: Trying to dial the pod
Jun  1 14:50:59.296: INFO: Controller my-hostname-basic-f07f260f-d408-494d-a6ca-c32d8ce04f98: Got expected result from replica 1 [my-hostname-basic-f07f260f-d408-494d-a6ca-c32d8ce04f98-qp795]: "my-hostname-basic-f07f260f-d408-494d-a6ca-c32d8ce04f98-qp795", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Jun  1 14:50:59.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2927" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":292,"completed":24,"skipped":364,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 160 lines ...
Jun  1 14:51:51.216: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7003/pods","resourceVersion":"2959"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Jun  1 14:51:51.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7003" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":292,"completed":25,"skipped":373,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 14:51:51.232: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jun  1 14:51:51.278: INFO: Waiting up to 5m0s for pod "pod-45f80c6a-fcf4-4196-bd96-1b657a42674f" in namespace "emptydir-4046" to be "Succeeded or Failed"
Jun  1 14:51:51.288: INFO: Pod "pod-45f80c6a-fcf4-4196-bd96-1b657a42674f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.403455ms
Jun  1 14:51:53.293: INFO: Pod "pod-45f80c6a-fcf4-4196-bd96-1b657a42674f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015570184s
Jun  1 14:51:55.297: INFO: Pod "pod-45f80c6a-fcf4-4196-bd96-1b657a42674f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019272533s
STEP: Saw pod success
Jun  1 14:51:55.297: INFO: Pod "pod-45f80c6a-fcf4-4196-bd96-1b657a42674f" satisfied condition "Succeeded or Failed"
Jun  1 14:51:55.300: INFO: Trying to get logs from node kind-worker pod pod-45f80c6a-fcf4-4196-bd96-1b657a42674f container test-container: <nil>
STEP: delete the pod
Jun  1 14:51:55.316: INFO: Waiting for pod pod-45f80c6a-fcf4-4196-bd96-1b657a42674f to disappear
Jun  1 14:51:55.319: INFO: Pod pod-45f80c6a-fcf4-4196-bd96-1b657a42674f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 14:51:55.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4046" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":26,"skipped":377,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 14:51:55.363: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b8dee057-f747-4909-85d8-143d87459804" in namespace "projected-8065" to be "Succeeded or Failed"
Jun  1 14:51:55.366: INFO: Pod "downwardapi-volume-b8dee057-f747-4909-85d8-143d87459804": Phase="Pending", Reason="", readiness=false. Elapsed: 2.933136ms
Jun  1 14:51:57.370: INFO: Pod "downwardapi-volume-b8dee057-f747-4909-85d8-143d87459804": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007729109s
Jun  1 14:51:59.375: INFO: Pod "downwardapi-volume-b8dee057-f747-4909-85d8-143d87459804": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012613616s
STEP: Saw pod success
Jun  1 14:51:59.375: INFO: Pod "downwardapi-volume-b8dee057-f747-4909-85d8-143d87459804" satisfied condition "Succeeded or Failed"
Jun  1 14:51:59.379: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-b8dee057-f747-4909-85d8-143d87459804 container client-container: <nil>
STEP: delete the pod
Jun  1 14:51:59.394: INFO: Waiting for pod downwardapi-volume-b8dee057-f747-4909-85d8-143d87459804 to disappear
Jun  1 14:51:59.396: INFO: Pod downwardapi-volume-b8dee057-f747-4909-85d8-143d87459804 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 14:51:59.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8065" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":292,"completed":27,"skipped":385,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 9 lines ...
STEP: Updating configmap projected-configmap-test-upd-dbbec103-f7ca-4cc5-b97b-65804edeb991
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 14:53:31.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3081" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":28,"skipped":387,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 14:53:43.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4914" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":292,"completed":29,"skipped":404,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 26 lines ...
  test/e2e/framework/framework.go:175
Jun  1 14:53:49.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5375" for this suite.
STEP: Destroying namespace "webhook-5375-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":292,"completed":30,"skipped":409,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 44 lines ...
Jun  1 14:54:10.616: INFO: Pod "test-rollover-deployment-7c4fd9c879-xd5gp" is available:
&Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-xd5gp test-rollover-deployment-7c4fd9c879- deployment-4489 /api/v1/namespaces/deployment-4489/pods/test-rollover-deployment-7c4fd9c879-xd5gp 9f7c96a8-038d-4ffc-8e7a-52e9c6883c05 3610 0 2020-06-01 14:53:58 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 9cc87c24-f457-4b06-bc4f-2f063f4f25ab 0xc001fdfba7 0xc001fdfba8}] []  [{kube-controller-manager Update v1 2020-06-01 14:53:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9cc87c24-f457-4b06-bc4f-2f063f4f25ab\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-01 14:54:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.33\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jtn5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jtn5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jtn5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,Securit
yContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 14:53:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 14:54:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 14:54:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 14:53:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.2.33,StartTime:2020-06-01 14:53:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-01 14:54:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://7ad4f1a78d688a3c052c27af8356e7531bbb9ec6603bd789708c97a91a58872b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.33,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Jun  1 14:54:10.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4489" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":292,"completed":31,"skipped":416,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Jun  1 14:54:14.679: INFO: Initial restart count of pod test-webserver-53373e7f-0a6a-4dc7-a12b-21e0baf04db2 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 14:58:15.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7822" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":292,"completed":32,"skipped":466,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-6f64f04e-a59b-45f4-a436-f4842ef065a9
STEP: Creating a pod to test consume secrets
Jun  1 14:58:15.251: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dca7ec75-cc86-47ef-8d99-06183c5b023c" in namespace "projected-2322" to be "Succeeded or Failed"
Jun  1 14:58:15.254: INFO: Pod "pod-projected-secrets-dca7ec75-cc86-47ef-8d99-06183c5b023c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.292221ms
Jun  1 14:58:17.259: INFO: Pod "pod-projected-secrets-dca7ec75-cc86-47ef-8d99-06183c5b023c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008056567s
Jun  1 14:58:19.264: INFO: Pod "pod-projected-secrets-dca7ec75-cc86-47ef-8d99-06183c5b023c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013270306s
STEP: Saw pod success
Jun  1 14:58:19.264: INFO: Pod "pod-projected-secrets-dca7ec75-cc86-47ef-8d99-06183c5b023c" satisfied condition "Succeeded or Failed"
Jun  1 14:58:19.267: INFO: Trying to get logs from node kind-worker pod pod-projected-secrets-dca7ec75-cc86-47ef-8d99-06183c5b023c container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun  1 14:58:19.294: INFO: Waiting for pod pod-projected-secrets-dca7ec75-cc86-47ef-8d99-06183c5b023c to disappear
Jun  1 14:58:19.297: INFO: Pod pod-projected-secrets-dca7ec75-cc86-47ef-8d99-06183c5b023c no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Jun  1 14:58:19.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2322" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":33,"skipped":495,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 14:58:19.342: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9e30d86-3716-4c9a-8b8b-e0b8a01192b5" in namespace "downward-api-1295" to be "Succeeded or Failed"
Jun  1 14:58:19.346: INFO: Pod "downwardapi-volume-d9e30d86-3716-4c9a-8b8b-e0b8a01192b5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.247793ms
Jun  1 14:58:21.354: INFO: Pod "downwardapi-volume-d9e30d86-3716-4c9a-8b8b-e0b8a01192b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011474743s
Jun  1 14:58:23.358: INFO: Pod "downwardapi-volume-d9e30d86-3716-4c9a-8b8b-e0b8a01192b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016027936s
STEP: Saw pod success
Jun  1 14:58:23.359: INFO: Pod "downwardapi-volume-d9e30d86-3716-4c9a-8b8b-e0b8a01192b5" satisfied condition "Succeeded or Failed"
Jun  1 14:58:23.362: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-d9e30d86-3716-4c9a-8b8b-e0b8a01192b5 container client-container: <nil>
STEP: delete the pod
Jun  1 14:58:23.377: INFO: Waiting for pod downwardapi-volume-d9e30d86-3716-4c9a-8b8b-e0b8a01192b5 to disappear
Jun  1 14:58:23.379: INFO: Pod downwardapi-volume-d9e30d86-3716-4c9a-8b8b-e0b8a01192b5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 14:58:23.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1295" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":34,"skipped":519,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-334e7654-7416-4f98-b0da-4cded6459203
STEP: Creating a pod to test consume secrets
Jun  1 14:58:23.430: INFO: Waiting up to 5m0s for pod "pod-secrets-74c15719-0989-463b-b2fa-75e869961c10" in namespace "secrets-6904" to be "Succeeded or Failed"
Jun  1 14:58:23.433: INFO: Pod "pod-secrets-74c15719-0989-463b-b2fa-75e869961c10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.73476ms
Jun  1 14:58:25.437: INFO: Pod "pod-secrets-74c15719-0989-463b-b2fa-75e869961c10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006366366s
Jun  1 14:58:27.441: INFO: Pod "pod-secrets-74c15719-0989-463b-b2fa-75e869961c10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010630382s
STEP: Saw pod success
Jun  1 14:58:27.441: INFO: Pod "pod-secrets-74c15719-0989-463b-b2fa-75e869961c10" satisfied condition "Succeeded or Failed"
Jun  1 14:58:27.444: INFO: Trying to get logs from node kind-worker pod pod-secrets-74c15719-0989-463b-b2fa-75e869961c10 container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 14:58:27.461: INFO: Waiting for pod pod-secrets-74c15719-0989-463b-b2fa-75e869961c10 to disappear
Jun  1 14:58:27.464: INFO: Pod pod-secrets-74c15719-0989-463b-b2fa-75e869961c10 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 14:58:27.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6904" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":292,"completed":35,"skipped":535,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 14:58:46.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8079" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":292,"completed":36,"skipped":550,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Jun  1 14:58:50.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1393" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":37,"skipped":598,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 14:59:06.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6214" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":292,"completed":38,"skipped":621,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 14:59:06.711: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f94aa4b8-fab5-4d92-8d90-9d0c706c43fb" in namespace "projected-7833" to be "Succeeded or Failed"
Jun  1 14:59:06.717: INFO: Pod "downwardapi-volume-f94aa4b8-fab5-4d92-8d90-9d0c706c43fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.176042ms
Jun  1 14:59:08.720: INFO: Pod "downwardapi-volume-f94aa4b8-fab5-4d92-8d90-9d0c706c43fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009222918s
Jun  1 14:59:10.724: INFO: Pod "downwardapi-volume-f94aa4b8-fab5-4d92-8d90-9d0c706c43fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012615066s
STEP: Saw pod success
Jun  1 14:59:10.724: INFO: Pod "downwardapi-volume-f94aa4b8-fab5-4d92-8d90-9d0c706c43fb" satisfied condition "Succeeded or Failed"
Jun  1 14:59:10.727: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-f94aa4b8-fab5-4d92-8d90-9d0c706c43fb container client-container: <nil>
STEP: delete the pod
Jun  1 14:59:10.743: INFO: Waiting for pod downwardapi-volume-f94aa4b8-fab5-4d92-8d90-9d0c706c43fb to disappear
Jun  1 14:59:10.746: INFO: Pod downwardapi-volume-f94aa4b8-fab5-4d92-8d90-9d0c706c43fb no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 14:59:10.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7833" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":292,"completed":39,"skipped":663,"failed":0}
SSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Jun  1 14:59:10.753: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jun  1 14:59:10.786: INFO: Waiting up to 5m0s for pod "downward-api-cd1e54f5-0aab-4b31-a4b9-768057faf9a8" in namespace "downward-api-4528" to be "Succeeded or Failed"
Jun  1 14:59:10.788: INFO: Pod "downward-api-cd1e54f5-0aab-4b31-a4b9-768057faf9a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184682ms
Jun  1 14:59:12.798: INFO: Pod "downward-api-cd1e54f5-0aab-4b31-a4b9-768057faf9a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012107734s
Jun  1 14:59:14.804: INFO: Pod "downward-api-cd1e54f5-0aab-4b31-a4b9-768057faf9a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017827923s
STEP: Saw pod success
Jun  1 14:59:14.804: INFO: Pod "downward-api-cd1e54f5-0aab-4b31-a4b9-768057faf9a8" satisfied condition "Succeeded or Failed"
Jun  1 14:59:14.807: INFO: Trying to get logs from node kind-worker pod downward-api-cd1e54f5-0aab-4b31-a4b9-768057faf9a8 container dapi-container: <nil>
STEP: delete the pod
Jun  1 14:59:14.823: INFO: Waiting for pod downward-api-cd1e54f5-0aab-4b31-a4b9-768057faf9a8 to disappear
Jun  1 14:59:14.826: INFO: Pod downward-api-cd1e54f5-0aab-4b31-a4b9-768057faf9a8 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Jun  1 14:59:14.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4528" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":292,"completed":40,"skipped":666,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] PreStop
... skipping 26 lines ...
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  test/e2e/framework/framework.go:175
Jun  1 14:59:27.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-5564" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":292,"completed":41,"skipped":675,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-92257bce-e063-4c0e-a3c9-c1469da0c4dc
STEP: Creating a pod to test consume configMaps
Jun  1 14:59:27.962: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f5af3bb0-a36d-4732-aa7e-e22447bcd341" in namespace "projected-310" to be "Succeeded or Failed"
Jun  1 14:59:27.966: INFO: Pod "pod-projected-configmaps-f5af3bb0-a36d-4732-aa7e-e22447bcd341": Phase="Pending", Reason="", readiness=false. Elapsed: 4.378258ms
Jun  1 14:59:29.971: INFO: Pod "pod-projected-configmaps-f5af3bb0-a36d-4732-aa7e-e22447bcd341": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009327331s
Jun  1 14:59:31.978: INFO: Pod "pod-projected-configmaps-f5af3bb0-a36d-4732-aa7e-e22447bcd341": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015489396s
STEP: Saw pod success
Jun  1 14:59:31.978: INFO: Pod "pod-projected-configmaps-f5af3bb0-a36d-4732-aa7e-e22447bcd341" satisfied condition "Succeeded or Failed"
Jun  1 14:59:31.981: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-f5af3bb0-a36d-4732-aa7e-e22447bcd341 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 14:59:31.995: INFO: Waiting for pod pod-projected-configmaps-f5af3bb0-a36d-4732-aa7e-e22447bcd341 to disappear
Jun  1 14:59:31.998: INFO: Pod pod-projected-configmaps-f5af3bb0-a36d-4732-aa7e-e22447bcd341 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 14:59:31.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-310" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":292,"completed":42,"skipped":696,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Jun  1 14:59:32.038: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 14:59:33.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4676" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":292,"completed":43,"skipped":698,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 39 lines ...
Jun  1 15:00:53.416: INFO: Waiting for statefulset status.replicas updated to 0
Jun  1 15:00:53.419: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Jun  1 15:00:53.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-98" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":292,"completed":44,"skipped":740,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  test/e2e/framework/framework.go:175
Jun  1 15:00:59.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1458" for this suite.
STEP: Destroying namespace "webhook-1458-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":292,"completed":45,"skipped":744,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 15:00:59.496: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-bd8e7989-d42c-4295-9285-28ee4cdfee66" in namespace "security-context-test-5341" to be "Succeeded or Failed"
Jun  1 15:00:59.503: INFO: Pod "busybox-readonly-false-bd8e7989-d42c-4295-9285-28ee4cdfee66": Phase="Pending", Reason="", readiness=false. Elapsed: 6.866902ms
Jun  1 15:01:01.507: INFO: Pod "busybox-readonly-false-bd8e7989-d42c-4295-9285-28ee4cdfee66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011299781s
Jun  1 15:01:03.511: INFO: Pod "busybox-readonly-false-bd8e7989-d42c-4295-9285-28ee4cdfee66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015074599s
Jun  1 15:01:03.511: INFO: Pod "busybox-readonly-false-bd8e7989-d42c-4295-9285-28ee4cdfee66" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Jun  1 15:01:03.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5341" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":292,"completed":46,"skipped":788,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 15:01:10.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-8205" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":292,"completed":47,"skipped":795,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 49 lines ...
Jun  1 15:01:31.182: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9721/pods","resourceVersion":"5905"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Jun  1 15:01:31.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9721" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":292,"completed":48,"skipped":825,"failed":0}
SSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
... skipping 418 lines ...
Jun  1 15:01:42.957: INFO: 99 %ile: 765.414754ms
Jun  1 15:01:42.957: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  test/e2e/framework/framework.go:175
Jun  1 15:01:42.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-6581" for this suite.
•{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":292,"completed":49,"skipped":830,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-downwardapi-2cbm
STEP: Creating a pod to test atomic-volume-subpath
Jun  1 15:01:43.026: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-2cbm" in namespace "subpath-5666" to be "Succeeded or Failed"
Jun  1 15:01:43.030: INFO: Pod "pod-subpath-test-downwardapi-2cbm": Phase="Pending", Reason="", readiness=false. Elapsed: 3.281994ms
Jun  1 15:01:45.033: INFO: Pod "pod-subpath-test-downwardapi-2cbm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006769344s
Jun  1 15:01:47.037: INFO: Pod "pod-subpath-test-downwardapi-2cbm": Phase="Running", Reason="", readiness=true. Elapsed: 4.010667929s
Jun  1 15:01:49.042: INFO: Pod "pod-subpath-test-downwardapi-2cbm": Phase="Running", Reason="", readiness=true. Elapsed: 6.015128614s
Jun  1 15:01:51.044: INFO: Pod "pod-subpath-test-downwardapi-2cbm": Phase="Running", Reason="", readiness=true. Elapsed: 8.017657567s
Jun  1 15:01:53.063: INFO: Pod "pod-subpath-test-downwardapi-2cbm": Phase="Running", Reason="", readiness=true. Elapsed: 10.036556504s
... skipping 2 lines ...
Jun  1 15:01:59.074: INFO: Pod "pod-subpath-test-downwardapi-2cbm": Phase="Running", Reason="", readiness=true. Elapsed: 16.047413713s
Jun  1 15:02:01.078: INFO: Pod "pod-subpath-test-downwardapi-2cbm": Phase="Running", Reason="", readiness=true. Elapsed: 18.051124178s
Jun  1 15:02:03.081: INFO: Pod "pod-subpath-test-downwardapi-2cbm": Phase="Running", Reason="", readiness=true. Elapsed: 20.054687608s
Jun  1 15:02:05.085: INFO: Pod "pod-subpath-test-downwardapi-2cbm": Phase="Running", Reason="", readiness=true. Elapsed: 22.058865223s
Jun  1 15:02:07.090: INFO: Pod "pod-subpath-test-downwardapi-2cbm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.063114771s
STEP: Saw pod success
Jun  1 15:02:07.090: INFO: Pod "pod-subpath-test-downwardapi-2cbm" satisfied condition "Succeeded or Failed"
Jun  1 15:02:07.092: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-downwardapi-2cbm container test-container-subpath-downwardapi-2cbm: <nil>
STEP: delete the pod
Jun  1 15:02:07.123: INFO: Waiting for pod pod-subpath-test-downwardapi-2cbm to disappear
Jun  1 15:02:07.126: INFO: Pod pod-subpath-test-downwardapi-2cbm no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-2cbm
Jun  1 15:02:07.126: INFO: Deleting pod "pod-subpath-test-downwardapi-2cbm" in namespace "subpath-5666"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Jun  1 15:02:07.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5666" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":292,"completed":50,"skipped":864,"failed":0}
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 42 lines ...
• [SLOW TEST:308.139 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":292,"completed":51,"skipped":872,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Jun  1 15:07:15.275: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's args
Jun  1 15:07:15.308: INFO: Waiting up to 5m0s for pod "var-expansion-9c733639-0e7d-453c-9745-11de2c857137" in namespace "var-expansion-6142" to be "Succeeded or Failed"
Jun  1 15:07:15.312: INFO: Pod "var-expansion-9c733639-0e7d-453c-9745-11de2c857137": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434146ms
Jun  1 15:07:17.317: INFO: Pod "var-expansion-9c733639-0e7d-453c-9745-11de2c857137": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009481864s
STEP: Saw pod success
Jun  1 15:07:17.317: INFO: Pod "var-expansion-9c733639-0e7d-453c-9745-11de2c857137" satisfied condition "Succeeded or Failed"
Jun  1 15:07:17.321: INFO: Trying to get logs from node kind-worker pod var-expansion-9c733639-0e7d-453c-9745-11de2c857137 container dapi-container: <nil>
STEP: delete the pod
Jun  1 15:07:17.351: INFO: Waiting for pod var-expansion-9c733639-0e7d-453c-9745-11de2c857137 to disappear
Jun  1 15:07:17.353: INFO: Pod var-expansion-9c733639-0e7d-453c-9745-11de2c857137 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Jun  1 15:07:17.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6142" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":292,"completed":52,"skipped":891,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 100 lines ...
Jun  1 15:08:40.843: INFO: Waiting for statefulset status.replicas updated to 0
Jun  1 15:08:40.846: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Jun  1 15:08:40.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4367" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":292,"completed":53,"skipped":913,"failed":0}
S
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 22 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 15:08:50.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7014" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":292,"completed":54,"skipped":914,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Jun  1 15:08:50.088: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 15:08:52.315: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 15:09:05.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8804" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":292,"completed":55,"skipped":920,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Events
... skipping 16 lines ...
Jun  1 15:09:11.786: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  test/e2e/framework/framework.go:175
Jun  1 15:09:11.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1006" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":292,"completed":56,"skipped":944,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 15:09:26.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4494" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":292,"completed":57,"skipped":955,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 33 lines ...

W0601 15:09:36.533737   11956 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Jun  1 15:09:36.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6036" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":292,"completed":58,"skipped":966,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 28 lines ...
Jun  1 15:09:40.594: INFO: Selector matched 1 pods for map[app:agnhost]
Jun  1 15:09:40.594: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 15:09:40.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8837" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":292,"completed":59,"skipped":979,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 13 lines ...
Jun  1 15:09:42.658: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Jun  1 15:09:42.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8413" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":292,"completed":60,"skipped":987,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
... skipping 18 lines ...
STEP: Deleting second CR
Jun  1 15:10:33.356: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-01T15:09:53Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-06-01T15:10:13Z]] name:name2 resourceVersion:9893 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:e90e672b-6d69-4878-9b61-304f21ecdc94] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 15:10:43.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-5891" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":292,"completed":61,"skipped":1039,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 26 lines ...
Jun  1 15:11:33.955: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-5407 /api/v1/namespaces/watch-5407/configmaps/e2e-watch-test-configmap-b 7f56e859-5b48-43f8-b223-d205318e9831 10087 0 2020-06-01 15:11:23 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-06-01 15:11:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jun  1 15:11:33.955: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-5407 /api/v1/namespaces/watch-5407/configmaps/e2e-watch-test-configmap-b 7f56e859-5b48-43f8-b223-d205318e9831 10087 0 2020-06-01 15:11:23 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-06-01 15:11:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Jun  1 15:11:43.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5407" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":292,"completed":62,"skipped":1066,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...

W0601 15:11:45.033767   11956 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Jun  1 15:11:45.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7766" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":292,"completed":63,"skipped":1080,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 67 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Jun  1 15:11:50.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9113" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":292,"completed":64,"skipped":1096,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 38 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Jun  1 15:11:54.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1494" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":292,"completed":65,"skipped":1108,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
Jun  1 15:12:01.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4106" for this suite.
STEP: Destroying namespace "webhook-4106-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":292,"completed":66,"skipped":1113,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 15:12:07.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7559" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":292,"completed":67,"skipped":1128,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 15:12:07.591: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-83064d25-0167-4bab-9907-ba68bdde2c59" in namespace "security-context-test-4260" to be "Succeeded or Failed"
Jun  1 15:12:07.600: INFO: Pod "alpine-nnp-false-83064d25-0167-4bab-9907-ba68bdde2c59": Phase="Pending", Reason="", readiness=false. Elapsed: 9.122956ms
Jun  1 15:12:09.604: INFO: Pod "alpine-nnp-false-83064d25-0167-4bab-9907-ba68bdde2c59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012758987s
Jun  1 15:12:11.607: INFO: Pod "alpine-nnp-false-83064d25-0167-4bab-9907-ba68bdde2c59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016201555s
Jun  1 15:12:11.607: INFO: Pod "alpine-nnp-false-83064d25-0167-4bab-9907-ba68bdde2c59" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Jun  1 15:12:11.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4260" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":68,"skipped":1148,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
Jun  1 15:12:17.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7945" for this suite.
STEP: Destroying namespace "webhook-7945-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":292,"completed":69,"skipped":1164,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-jk5x
STEP: Creating a pod to test atomic-volume-subpath
Jun  1 15:12:17.507: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jk5x" in namespace "subpath-284" to be "Succeeded or Failed"
Jun  1 15:12:17.511: INFO: Pod "pod-subpath-test-configmap-jk5x": Phase="Pending", Reason="", readiness=false. Elapsed: 3.851206ms
Jun  1 15:12:19.515: INFO: Pod "pod-subpath-test-configmap-jk5x": Phase="Running", Reason="", readiness=true. Elapsed: 2.007782509s
Jun  1 15:12:21.520: INFO: Pod "pod-subpath-test-configmap-jk5x": Phase="Running", Reason="", readiness=true. Elapsed: 4.012777129s
Jun  1 15:12:23.524: INFO: Pod "pod-subpath-test-configmap-jk5x": Phase="Running", Reason="", readiness=true. Elapsed: 6.01624783s
Jun  1 15:12:25.528: INFO: Pod "pod-subpath-test-configmap-jk5x": Phase="Running", Reason="", readiness=true. Elapsed: 8.021084502s
Jun  1 15:12:27.534: INFO: Pod "pod-subpath-test-configmap-jk5x": Phase="Running", Reason="", readiness=true. Elapsed: 10.026592576s
... skipping 2 lines ...
Jun  1 15:12:33.547: INFO: Pod "pod-subpath-test-configmap-jk5x": Phase="Running", Reason="", readiness=true. Elapsed: 16.039976506s
Jun  1 15:12:35.553: INFO: Pod "pod-subpath-test-configmap-jk5x": Phase="Running", Reason="", readiness=true. Elapsed: 18.045484078s
Jun  1 15:12:37.556: INFO: Pod "pod-subpath-test-configmap-jk5x": Phase="Running", Reason="", readiness=true. Elapsed: 20.048926473s
Jun  1 15:12:39.559: INFO: Pod "pod-subpath-test-configmap-jk5x": Phase="Running", Reason="", readiness=true. Elapsed: 22.052019343s
Jun  1 15:12:41.563: INFO: Pod "pod-subpath-test-configmap-jk5x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.055847377s
STEP: Saw pod success
Jun  1 15:12:41.563: INFO: Pod "pod-subpath-test-configmap-jk5x" satisfied condition "Succeeded or Failed"
Jun  1 15:12:41.566: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-configmap-jk5x container test-container-subpath-configmap-jk5x: <nil>
STEP: delete the pod
Jun  1 15:12:41.583: INFO: Waiting for pod pod-subpath-test-configmap-jk5x to disappear
Jun  1 15:12:41.585: INFO: Pod pod-subpath-test-configmap-jk5x no longer exists
STEP: Deleting pod pod-subpath-test-configmap-jk5x
Jun  1 15:12:41.585: INFO: Deleting pod "pod-subpath-test-configmap-jk5x" in namespace "subpath-284"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Jun  1 15:12:41.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-284" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":292,"completed":70,"skipped":1180,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
... skipping 12 lines ...
Jun  1 15:12:46.644: INFO: Trying to dial the pod
Jun  1 15:12:51.654: INFO: Controller my-hostname-basic-3226d499-46e6-4d45-b9bf-46040ea8574e: Got expected result from replica 1 [my-hostname-basic-3226d499-46e6-4d45-b9bf-46040ea8574e-qfrnv]: "my-hostname-basic-3226d499-46e6-4d45-b9bf-46040ea8574e-qfrnv", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:175
Jun  1 15:12:51.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-941" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":292,"completed":71,"skipped":1199,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Jun  1 15:12:57.412: INFO: stderr: ""
Jun  1 15:12:57.413: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4167-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<map[string]>\n     Specification of Waldo\n\n   status\t<Object>\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 15:13:00.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9017" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":292,"completed":72,"skipped":1208,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 71 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 15:13:26.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4957" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":292,"completed":73,"skipped":1221,"failed":0}
SSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
... skipping 42 lines ...
Jun  1 15:13:33.961: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 15:13:34.125: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  test/e2e/framework/framework.go:175
Jun  1 15:13:34.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4365" for this suite.
•{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":74,"skipped":1226,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Jun  1 15:13:34.365: INFO: stderr: ""
Jun  1 15:13:34.365: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-beta.0.318+b618411f1edb98\", GitCommit:\"b618411f1edb98afe807866fb3a607356d72ba24\", GitTreeState:\"clean\", BuildDate:\"2020-06-01T06:57:55Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-beta.0.318+b618411f1edb98\", GitCommit:\"b618411f1edb98afe807866fb3a607356d72ba24\", GitTreeState:\"clean\", BuildDate:\"2020-06-01T06:57:55Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 15:13:34.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-250" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":292,"completed":75,"skipped":1227,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
Jun  1 15:13:44.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1906" for this suite.
STEP: Destroying namespace "webhook-1906-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":292,"completed":76,"skipped":1260,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 17 lines ...
Jun  1 15:13:52.541: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jun  1 15:13:52.545: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Jun  1 15:13:52.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-555" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":292,"completed":77,"skipped":1308,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 57 lines ...
Jun  1 15:16:14.431: INFO: Waiting for statefulset status.replicas updated to 0
Jun  1 15:16:14.435: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Jun  1 15:16:14.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6217" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":292,"completed":78,"skipped":1330,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 13 lines ...
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 15:16:31.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8691" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":292,"completed":79,"skipped":1332,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Jun  1 15:16:31.536: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jun  1 15:16:31.584: INFO: Waiting up to 5m0s for pod "downward-api-276f53b3-e304-4a28-9382-6ae5739602de" in namespace "downward-api-1833" to be "Succeeded or Failed"
Jun  1 15:16:31.587: INFO: Pod "downward-api-276f53b3-e304-4a28-9382-6ae5739602de": Phase="Pending", Reason="", readiness=false. Elapsed: 3.178391ms
Jun  1 15:16:33.591: INFO: Pod "downward-api-276f53b3-e304-4a28-9382-6ae5739602de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006906394s
Jun  1 15:16:35.595: INFO: Pod "downward-api-276f53b3-e304-4a28-9382-6ae5739602de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011221558s
STEP: Saw pod success
Jun  1 15:16:35.595: INFO: Pod "downward-api-276f53b3-e304-4a28-9382-6ae5739602de" satisfied condition "Succeeded or Failed"
Jun  1 15:16:35.598: INFO: Trying to get logs from node kind-worker pod downward-api-276f53b3-e304-4a28-9382-6ae5739602de container dapi-container: <nil>
STEP: delete the pod
Jun  1 15:16:35.626: INFO: Waiting for pod downward-api-276f53b3-e304-4a28-9382-6ae5739602de to disappear
Jun  1 15:16:35.629: INFO: Pod downward-api-276f53b3-e304-4a28-9382-6ae5739602de no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Jun  1 15:16:35.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1833" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":292,"completed":80,"skipped":1344,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Jun  1 15:16:35.669: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jun  1 15:16:38.622: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 15:16:50.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8157" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":292,"completed":81,"skipped":1357,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-6490daa2-1372-4ebc-a8d1-4da80efa037e
STEP: Creating a pod to test consume secrets
Jun  1 15:16:50.070: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-15d1fddf-fe65-440f-9e56-fc62c2872aed" in namespace "projected-9642" to be "Succeeded or Failed"
Jun  1 15:16:50.072: INFO: Pod "pod-projected-secrets-15d1fddf-fe65-440f-9e56-fc62c2872aed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057452ms
Jun  1 15:16:52.076: INFO: Pod "pod-projected-secrets-15d1fddf-fe65-440f-9e56-fc62c2872aed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005745987s
STEP: Saw pod success
Jun  1 15:16:52.076: INFO: Pod "pod-projected-secrets-15d1fddf-fe65-440f-9e56-fc62c2872aed" satisfied condition "Succeeded or Failed"
Jun  1 15:16:52.079: INFO: Trying to get logs from node kind-worker pod pod-projected-secrets-15d1fddf-fe65-440f-9e56-fc62c2872aed container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun  1 15:16:52.093: INFO: Waiting for pod pod-projected-secrets-15d1fddf-fe65-440f-9e56-fc62c2872aed to disappear
Jun  1 15:16:52.095: INFO: Pod pod-projected-secrets-15d1fddf-fe65-440f-9e56-fc62c2872aed no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Jun  1 15:16:52.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9642" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":82,"skipped":1386,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 10 lines ...
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Jun  1 15:16:52.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4235" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":292,"completed":83,"skipped":1396,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 25 lines ...
Jun  1 15:16:54.319: INFO: Pod "test-recreate-deployment-d5667d9c7-62rcp" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-62rcp test-recreate-deployment-d5667d9c7- deployment-8023 /api/v1/namespaces/deployment-8023/pods/test-recreate-deployment-d5667d9c7-62rcp d261de81-e9ae-444c-8a83-61ab5342188c 12343 0 2020-06-01 15:16:54 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 b6ef65d8-f5cd-4392-8150-f04cf902d625 0xc0035b56e0 0xc0035b56e1}] []  [{kube-controller-manager Update v1 2020-06-01 15:16:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b6ef65d8-f5cd-4392-8150-f04cf902d625\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-01 15:16:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-78vwz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-78vwz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-78vwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,Ru
nAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 15:16:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 15:16:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 15:16:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 15:16:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2020-06-01 15:16:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Jun  1 15:16:54.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8023" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":292,"completed":84,"skipped":1423,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-9ab17936-e32b-4507-b2a6-51a4bf383f50
STEP: Creating a pod to test consume configMaps
Jun  1 15:16:54.364: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-22ddfc88-a6fc-437e-943c-5cdf145eff58" in namespace "projected-6700" to be "Succeeded or Failed"
Jun  1 15:16:54.367: INFO: Pod "pod-projected-configmaps-22ddfc88-a6fc-437e-943c-5cdf145eff58": Phase="Pending", Reason="", readiness=false. Elapsed: 3.059436ms
Jun  1 15:16:56.375: INFO: Pod "pod-projected-configmaps-22ddfc88-a6fc-437e-943c-5cdf145eff58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010979286s
Jun  1 15:16:58.379: INFO: Pod "pod-projected-configmaps-22ddfc88-a6fc-437e-943c-5cdf145eff58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01490602s
STEP: Saw pod success
Jun  1 15:16:58.379: INFO: Pod "pod-projected-configmaps-22ddfc88-a6fc-437e-943c-5cdf145eff58" satisfied condition "Succeeded or Failed"
Jun  1 15:16:58.382: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-22ddfc88-a6fc-437e-943c-5cdf145eff58 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 15:16:58.399: INFO: Waiting for pod pod-projected-configmaps-22ddfc88-a6fc-437e-943c-5cdf145eff58 to disappear
Jun  1 15:16:58.402: INFO: Pod pod-projected-configmaps-22ddfc88-a6fc-437e-943c-5cdf145eff58 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 15:16:58.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6700" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":85,"skipped":1432,"failed":0}
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 341 lines ...
Jun  1 15:17:06.792: INFO: Deleting ReplicationController proxy-service-jvpk4 took: 5.945312ms
Jun  1 15:17:06.892: INFO: Terminating ReplicationController proxy-service-jvpk4 pods took: 100.273322ms
[AfterEach] version v1
  test/e2e/framework/framework.go:175
Jun  1 15:17:09.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4064" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":292,"completed":86,"skipped":1435,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
... skipping 7 lines ...
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  test/e2e/framework/framework.go:175
Jun  1 15:17:09.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-5271" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":292,"completed":87,"skipped":1445,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 15:17:09.174: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bd363f0b-e00e-40d1-8510-e15a523b8c75" in namespace "downward-api-9191" to be "Succeeded or Failed"
Jun  1 15:17:09.178: INFO: Pod "downwardapi-volume-bd363f0b-e00e-40d1-8510-e15a523b8c75": Phase="Pending", Reason="", readiness=false. Elapsed: 3.306708ms
Jun  1 15:17:11.183: INFO: Pod "downwardapi-volume-bd363f0b-e00e-40d1-8510-e15a523b8c75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008666874s
STEP: Saw pod success
Jun  1 15:17:11.183: INFO: Pod "downwardapi-volume-bd363f0b-e00e-40d1-8510-e15a523b8c75" satisfied condition "Succeeded or Failed"
Jun  1 15:17:11.186: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-bd363f0b-e00e-40d1-8510-e15a523b8c75 container client-container: <nil>
STEP: delete the pod
Jun  1 15:17:11.201: INFO: Waiting for pod downwardapi-volume-bd363f0b-e00e-40d1-8510-e15a523b8c75 to disappear
Jun  1 15:17:11.204: INFO: Pod downwardapi-volume-bd363f0b-e00e-40d1-8510-e15a523b8c75 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 15:17:11.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9191" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":88,"skipped":1463,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Jun  1 15:17:11.240: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 15:17:11.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3559" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":292,"completed":89,"skipped":1492,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath 
  runs ReplicaSets to verify preemption running path [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 32 lines ...
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/framework.go:175
Jun  1 15:18:49.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-6736" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:75
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":292,"completed":90,"skipped":1504,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 15:18:49.078: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e52d82f2-6043-4840-9078-7f3590f20729" in namespace "downward-api-2323" to be "Succeeded or Failed"
Jun  1 15:18:49.080: INFO: Pod "downwardapi-volume-e52d82f2-6043-4840-9078-7f3590f20729": Phase="Pending", Reason="", readiness=false. Elapsed: 2.254723ms
Jun  1 15:18:51.083: INFO: Pod "downwardapi-volume-e52d82f2-6043-4840-9078-7f3590f20729": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005747041s
Jun  1 15:18:53.089: INFO: Pod "downwardapi-volume-e52d82f2-6043-4840-9078-7f3590f20729": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01118924s
STEP: Saw pod success
Jun  1 15:18:53.089: INFO: Pod "downwardapi-volume-e52d82f2-6043-4840-9078-7f3590f20729" satisfied condition "Succeeded or Failed"
Jun  1 15:18:53.091: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-e52d82f2-6043-4840-9078-7f3590f20729 container client-container: <nil>
STEP: delete the pod
Jun  1 15:18:53.118: INFO: Waiting for pod downwardapi-volume-e52d82f2-6043-4840-9078-7f3590f20729 to disappear
Jun  1 15:18:53.120: INFO: Pod downwardapi-volume-e52d82f2-6043-4840-9078-7f3590f20729 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 15:18:53.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2323" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":292,"completed":91,"skipped":1507,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 9 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 15:18:53.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9926" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":292,"completed":92,"skipped":1537,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
Jun  1 15:18:57.736: INFO: Successfully updated pod "annotationupdatea53b5722-f70a-4a6b-afd3-9f02604ab2b7"
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 15:18:59.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9199" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":292,"completed":93,"skipped":1559,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 15:18:59.787: INFO: Waiting up to 5m0s for pod "downwardapi-volume-480b776b-0c0d-4417-9328-edb81d77c337" in namespace "downward-api-8864" to be "Succeeded or Failed"
Jun  1 15:18:59.790: INFO: Pod "downwardapi-volume-480b776b-0c0d-4417-9328-edb81d77c337": Phase="Pending", Reason="", readiness=false. Elapsed: 2.734623ms
Jun  1 15:19:01.794: INFO: Pod "downwardapi-volume-480b776b-0c0d-4417-9328-edb81d77c337": Phase="Running", Reason="", readiness=true. Elapsed: 2.006651777s
Jun  1 15:19:03.798: INFO: Pod "downwardapi-volume-480b776b-0c0d-4417-9328-edb81d77c337": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010696459s
STEP: Saw pod success
Jun  1 15:19:03.798: INFO: Pod "downwardapi-volume-480b776b-0c0d-4417-9328-edb81d77c337" satisfied condition "Succeeded or Failed"
Jun  1 15:19:03.800: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-480b776b-0c0d-4417-9328-edb81d77c337 container client-container: <nil>
STEP: delete the pod
Jun  1 15:19:03.815: INFO: Waiting for pod downwardapi-volume-480b776b-0c0d-4417-9328-edb81d77c337 to disappear
Jun  1 15:19:03.818: INFO: Pod downwardapi-volume-480b776b-0c0d-4417-9328-edb81d77c337 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 15:19:03.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8864" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":292,"completed":94,"skipped":1627,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 15:19:03.854: INFO: Waiting up to 5m0s for pod "downwardapi-volume-64759420-2cb7-491a-856b-7128dc7e66f8" in namespace "downward-api-5753" to be "Succeeded or Failed"
Jun  1 15:19:03.856: INFO: Pod "downwardapi-volume-64759420-2cb7-491a-856b-7128dc7e66f8": Phase="Pending", Reason="", readiness=false. Elapsed: 1.987929ms
Jun  1 15:19:05.860: INFO: Pod "downwardapi-volume-64759420-2cb7-491a-856b-7128dc7e66f8": Phase="Running", Reason="", readiness=true. Elapsed: 2.00564231s
Jun  1 15:19:07.863: INFO: Pod "downwardapi-volume-64759420-2cb7-491a-856b-7128dc7e66f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009525181s
STEP: Saw pod success
Jun  1 15:19:07.863: INFO: Pod "downwardapi-volume-64759420-2cb7-491a-856b-7128dc7e66f8" satisfied condition "Succeeded or Failed"
Jun  1 15:19:07.866: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-64759420-2cb7-491a-856b-7128dc7e66f8 container client-container: <nil>
STEP: delete the pod
Jun  1 15:19:07.881: INFO: Waiting for pod downwardapi-volume-64759420-2cb7-491a-856b-7128dc7e66f8 to disappear
Jun  1 15:19:07.883: INFO: Pod downwardapi-volume-64759420-2cb7-491a-856b-7128dc7e66f8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 15:19:07.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5753" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":95,"skipped":1645,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
Jun  1 15:19:11.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6297" for this suite.
STEP: Destroying namespace "webhook-6297-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":292,"completed":96,"skipped":1646,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
Jun  1 15:19:18.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2704" for this suite.
STEP: Destroying namespace "webhook-2704-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":292,"completed":97,"skipped":1659,"failed":0}
SSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
Jun  1 15:20:14.923: INFO: Restart count of pod container-probe-8568/busybox-8a530c2c-8c0c-481d-a548-8aa47986dc20 is now 1 (54.111317164s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 15:20:14.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8568" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":292,"completed":98,"skipped":1665,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 11 lines ...
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 15:20:32.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3925" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":292,"completed":99,"skipped":1685,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 27 lines ...
Jun  1 15:21:36.384: INFO: Terminating ReplicationController wrapped-volume-race-842803b5-6744-4a2f-b573-fd1efa6112f6 pods took: 100.251522ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:175
Jun  1 15:21:51.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6141" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":292,"completed":100,"skipped":1687,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-cdf8389c-c6c6-46a2-8031-abc535167131
STEP: Creating a pod to test consume configMaps
Jun  1 15:21:51.612: INFO: Waiting up to 5m0s for pod "pod-configmaps-b3cec00c-5adc-4eb6-9d92-f27d6291e0f1" in namespace "configmap-4203" to be "Succeeded or Failed"
Jun  1 15:21:51.615: INFO: Pod "pod-configmaps-b3cec00c-5adc-4eb6-9d92-f27d6291e0f1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.550688ms
Jun  1 15:21:53.619: INFO: Pod "pod-configmaps-b3cec00c-5adc-4eb6-9d92-f27d6291e0f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007558918s
Jun  1 15:21:55.624: INFO: Pod "pod-configmaps-b3cec00c-5adc-4eb6-9d92-f27d6291e0f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012523157s
STEP: Saw pod success
Jun  1 15:21:55.624: INFO: Pod "pod-configmaps-b3cec00c-5adc-4eb6-9d92-f27d6291e0f1" satisfied condition "Succeeded or Failed"
Jun  1 15:21:55.627: INFO: Trying to get logs from node kind-worker pod pod-configmaps-b3cec00c-5adc-4eb6-9d92-f27d6291e0f1 container configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 15:21:55.648: INFO: Waiting for pod pod-configmaps-b3cec00c-5adc-4eb6-9d92-f27d6291e0f1 to disappear
Jun  1 15:21:55.650: INFO: Pod pod-configmaps-b3cec00c-5adc-4eb6-9d92-f27d6291e0f1 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 15:21:55.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4203" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":292,"completed":101,"skipped":1707,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 17 lines ...
Jun  1 15:21:55.701: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-2043 /api/v1/namespaces/watch-2043/configmaps/e2e-watch-test-watch-closed af80b8ca-567c-43fe-8810-cacc25b3fe09 14583 0 2020-06-01 15:21:55 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-06-01 15:21:55 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jun  1 15:21:55.702: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-2043 /api/v1/namespaces/watch-2043/configmaps/e2e-watch-test-watch-closed af80b8ca-567c-43fe-8810-cacc25b3fe09 14584 0 2020-06-01 15:21:55 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-06-01 15:21:55 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Jun  1 15:21:55.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2043" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":292,"completed":102,"skipped":1733,"failed":0}
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-sxvd
STEP: Creating a pod to test atomic-volume-subpath
Jun  1 15:21:55.742: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-sxvd" in namespace "subpath-125" to be "Succeeded or Failed"
Jun  1 15:21:55.744: INFO: Pod "pod-subpath-test-configmap-sxvd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.001951ms
Jun  1 15:21:57.749: INFO: Pod "pod-subpath-test-configmap-sxvd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006483489s
Jun  1 15:21:59.754: INFO: Pod "pod-subpath-test-configmap-sxvd": Phase="Running", Reason="", readiness=true. Elapsed: 4.011162662s
Jun  1 15:22:01.758: INFO: Pod "pod-subpath-test-configmap-sxvd": Phase="Running", Reason="", readiness=true. Elapsed: 6.015772573s
Jun  1 15:22:03.763: INFO: Pod "pod-subpath-test-configmap-sxvd": Phase="Running", Reason="", readiness=true. Elapsed: 8.02014311s
Jun  1 15:22:05.766: INFO: Pod "pod-subpath-test-configmap-sxvd": Phase="Running", Reason="", readiness=true. Elapsed: 10.023686186s
... skipping 2 lines ...
Jun  1 15:22:11.779: INFO: Pod "pod-subpath-test-configmap-sxvd": Phase="Running", Reason="", readiness=true. Elapsed: 16.036054397s
Jun  1 15:22:13.783: INFO: Pod "pod-subpath-test-configmap-sxvd": Phase="Running", Reason="", readiness=true. Elapsed: 18.040626419s
Jun  1 15:22:15.786: INFO: Pod "pod-subpath-test-configmap-sxvd": Phase="Running", Reason="", readiness=true. Elapsed: 20.043816843s
Jun  1 15:22:17.790: INFO: Pod "pod-subpath-test-configmap-sxvd": Phase="Running", Reason="", readiness=true. Elapsed: 22.047480809s
Jun  1 15:22:19.796: INFO: Pod "pod-subpath-test-configmap-sxvd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.05370489s
STEP: Saw pod success
Jun  1 15:22:19.796: INFO: Pod "pod-subpath-test-configmap-sxvd" satisfied condition "Succeeded or Failed"
Jun  1 15:22:19.799: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-configmap-sxvd container test-container-subpath-configmap-sxvd: <nil>
STEP: delete the pod
Jun  1 15:22:19.815: INFO: Waiting for pod pod-subpath-test-configmap-sxvd to disappear
Jun  1 15:22:19.817: INFO: Pod pod-subpath-test-configmap-sxvd no longer exists
STEP: Deleting pod pod-subpath-test-configmap-sxvd
Jun  1 15:22:19.817: INFO: Deleting pod "pod-subpath-test-configmap-sxvd" in namespace "subpath-125"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Jun  1 15:22:19.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-125" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":292,"completed":103,"skipped":1736,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 15:22:19.859: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c649fa0-c0e2-4640-8a7f-a7f571d303c5" in namespace "projected-1466" to be "Succeeded or Failed"
Jun  1 15:22:19.862: INFO: Pod "downwardapi-volume-1c649fa0-c0e2-4640-8a7f-a7f571d303c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.986036ms
Jun  1 15:22:21.866: INFO: Pod "downwardapi-volume-1c649fa0-c0e2-4640-8a7f-a7f571d303c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006713751s
Jun  1 15:22:23.871: INFO: Pod "downwardapi-volume-1c649fa0-c0e2-4640-8a7f-a7f571d303c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011392666s
STEP: Saw pod success
Jun  1 15:22:23.871: INFO: Pod "downwardapi-volume-1c649fa0-c0e2-4640-8a7f-a7f571d303c5" satisfied condition "Succeeded or Failed"
Jun  1 15:22:23.874: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-1c649fa0-c0e2-4640-8a7f-a7f571d303c5 container client-container: <nil>
STEP: delete the pod
Jun  1 15:22:23.887: INFO: Waiting for pod downwardapi-volume-1c649fa0-c0e2-4640-8a7f-a7f571d303c5 to disappear
Jun  1 15:22:23.891: INFO: Pod downwardapi-volume-1c649fa0-c0e2-4640-8a7f-a7f571d303c5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 15:22:23.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1466" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":104,"skipped":1750,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Jun  1 15:22:29.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5212" for this suite.
STEP: Destroying namespace "webhook-5212-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":292,"completed":105,"skipped":1758,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 15:22:29.953: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun  1 15:22:30.007: INFO: Waiting up to 5m0s for pod "pod-1f23c79c-bdb1-4c95-865e-2233a8908157" in namespace "emptydir-9596" to be "Succeeded or Failed"
Jun  1 15:22:30.013: INFO: Pod "pod-1f23c79c-bdb1-4c95-865e-2233a8908157": Phase="Pending", Reason="", readiness=false. Elapsed: 6.854061ms
Jun  1 15:22:32.018: INFO: Pod "pod-1f23c79c-bdb1-4c95-865e-2233a8908157": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011230653s
Jun  1 15:22:34.022: INFO: Pod "pod-1f23c79c-bdb1-4c95-865e-2233a8908157": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015890939s
STEP: Saw pod success
Jun  1 15:22:34.023: INFO: Pod "pod-1f23c79c-bdb1-4c95-865e-2233a8908157" satisfied condition "Succeeded or Failed"
Jun  1 15:22:34.026: INFO: Trying to get logs from node kind-worker pod pod-1f23c79c-bdb1-4c95-865e-2233a8908157 container test-container: <nil>
STEP: delete the pod
Jun  1 15:22:34.038: INFO: Waiting for pod pod-1f23c79c-bdb1-4c95-865e-2233a8908157 to disappear
Jun  1 15:22:34.041: INFO: Pod pod-1f23c79c-bdb1-4c95-865e-2233a8908157 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 15:22:34.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9596" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":106,"skipped":1781,"failed":0}

------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-8d6169f7-5d82-4940-90c4-cd8fd0e59941
STEP: Creating a pod to test consume secrets
Jun  1 15:22:34.083: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-41439170-365d-4a74-9113-465a2ba9afff" in namespace "projected-8014" to be "Succeeded or Failed"
Jun  1 15:22:34.086: INFO: Pod "pod-projected-secrets-41439170-365d-4a74-9113-465a2ba9afff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.893151ms
Jun  1 15:22:36.090: INFO: Pod "pod-projected-secrets-41439170-365d-4a74-9113-465a2ba9afff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00625825s
Jun  1 15:22:38.093: INFO: Pod "pod-projected-secrets-41439170-365d-4a74-9113-465a2ba9afff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010101976s
STEP: Saw pod success
Jun  1 15:22:38.094: INFO: Pod "pod-projected-secrets-41439170-365d-4a74-9113-465a2ba9afff" satisfied condition "Succeeded or Failed"
Jun  1 15:22:38.096: INFO: Trying to get logs from node kind-worker pod pod-projected-secrets-41439170-365d-4a74-9113-465a2ba9afff container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun  1 15:22:38.111: INFO: Waiting for pod pod-projected-secrets-41439170-365d-4a74-9113-465a2ba9afff to disappear
Jun  1 15:22:38.114: INFO: Pod pod-projected-secrets-41439170-365d-4a74-9113-465a2ba9afff no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Jun  1 15:22:38.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8014" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":107,"skipped":1781,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 15:22:38.155: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7cb5e50c-8b30-49cc-b5d2-6530a01159fd" in namespace "downward-api-6527" to be "Succeeded or Failed"
Jun  1 15:22:38.158: INFO: Pod "downwardapi-volume-7cb5e50c-8b30-49cc-b5d2-6530a01159fd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.023556ms
Jun  1 15:22:40.162: INFO: Pod "downwardapi-volume-7cb5e50c-8b30-49cc-b5d2-6530a01159fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006952445s
STEP: Saw pod success
Jun  1 15:22:40.162: INFO: Pod "downwardapi-volume-7cb5e50c-8b30-49cc-b5d2-6530a01159fd" satisfied condition "Succeeded or Failed"
Jun  1 15:22:40.165: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-7cb5e50c-8b30-49cc-b5d2-6530a01159fd container client-container: <nil>
STEP: delete the pod
Jun  1 15:22:40.184: INFO: Waiting for pod downwardapi-volume-7cb5e50c-8b30-49cc-b5d2-6530a01159fd to disappear
Jun  1 15:22:40.188: INFO: Pod downwardapi-volume-7cb5e50c-8b30-49cc-b5d2-6530a01159fd no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 15:22:40.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6527" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":292,"completed":108,"skipped":1785,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 60 lines ...
Jun  1 15:22:48.474: INFO: stderr: ""
Jun  1 15:22:48.474: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 15:22:48.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6800" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":292,"completed":109,"skipped":1797,"failed":0}
SS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun  1 15:22:56.595: INFO: File wheezy_udp@dns-test-service-3.dns-2622.svc.cluster.local from pod  dns-2622/dns-test-6ddd8614-ea1a-4500-a574-ab81dc7eeef1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 15:22:56.599: INFO: File jessie_udp@dns-test-service-3.dns-2622.svc.cluster.local from pod  dns-2622/dns-test-6ddd8614-ea1a-4500-a574-ab81dc7eeef1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 15:22:56.599: INFO: Lookups using dns-2622/dns-test-6ddd8614-ea1a-4500-a574-ab81dc7eeef1 failed for: [wheezy_udp@dns-test-service-3.dns-2622.svc.cluster.local jessie_udp@dns-test-service-3.dns-2622.svc.cluster.local]

Jun  1 15:23:01.603: INFO: File wheezy_udp@dns-test-service-3.dns-2622.svc.cluster.local from pod  dns-2622/dns-test-6ddd8614-ea1a-4500-a574-ab81dc7eeef1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 15:23:01.608: INFO: File jessie_udp@dns-test-service-3.dns-2622.svc.cluster.local from pod  dns-2622/dns-test-6ddd8614-ea1a-4500-a574-ab81dc7eeef1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 15:23:01.608: INFO: Lookups using dns-2622/dns-test-6ddd8614-ea1a-4500-a574-ab81dc7eeef1 failed for: [wheezy_udp@dns-test-service-3.dns-2622.svc.cluster.local jessie_udp@dns-test-service-3.dns-2622.svc.cluster.local]

Jun  1 15:23:06.603: INFO: File wheezy_udp@dns-test-service-3.dns-2622.svc.cluster.local from pod  dns-2622/dns-test-6ddd8614-ea1a-4500-a574-ab81dc7eeef1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 15:23:06.607: INFO: File jessie_udp@dns-test-service-3.dns-2622.svc.cluster.local from pod  dns-2622/dns-test-6ddd8614-ea1a-4500-a574-ab81dc7eeef1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 15:23:06.607: INFO: Lookups using dns-2622/dns-test-6ddd8614-ea1a-4500-a574-ab81dc7eeef1 failed for: [wheezy_udp@dns-test-service-3.dns-2622.svc.cluster.local jessie_udp@dns-test-service-3.dns-2622.svc.cluster.local]

Jun  1 15:23:11.604: INFO: File wheezy_udp@dns-test-service-3.dns-2622.svc.cluster.local from pod  dns-2622/dns-test-6ddd8614-ea1a-4500-a574-ab81dc7eeef1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 15:23:11.608: INFO: File jessie_udp@dns-test-service-3.dns-2622.svc.cluster.local from pod  dns-2622/dns-test-6ddd8614-ea1a-4500-a574-ab81dc7eeef1 contains '' instead of 'bar.example.com.'
Jun  1 15:23:11.608: INFO: Lookups using dns-2622/dns-test-6ddd8614-ea1a-4500-a574-ab81dc7eeef1 failed for: [wheezy_udp@dns-test-service-3.dns-2622.svc.cluster.local jessie_udp@dns-test-service-3.dns-2622.svc.cluster.local]

Jun  1 15:23:16.603: INFO: File wheezy_udp@dns-test-service-3.dns-2622.svc.cluster.local from pod  dns-2622/dns-test-6ddd8614-ea1a-4500-a574-ab81dc7eeef1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 15:23:16.606: INFO: File jessie_udp@dns-test-service-3.dns-2622.svc.cluster.local from pod  dns-2622/dns-test-6ddd8614-ea1a-4500-a574-ab81dc7eeef1 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun  1 15:23:16.606: INFO: Lookups using dns-2622/dns-test-6ddd8614-ea1a-4500-a574-ab81dc7eeef1 failed for: [wheezy_udp@dns-test-service-3.dns-2622.svc.cluster.local jessie_udp@dns-test-service-3.dns-2622.svc.cluster.local]

Jun  1 15:23:21.607: INFO: DNS probes using dns-test-6ddd8614-ea1a-4500-a574-ab81dc7eeef1 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2622.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2622.svc.cluster.local; sleep 1; done
... skipping 9 lines ...
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 15:23:25.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2622" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":292,"completed":110,"skipped":1799,"failed":0}
SSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 14 lines ...
STEP: verifying the updated pod is in kubernetes
Jun  1 15:23:30.347: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Jun  1 15:23:30.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3990" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":292,"completed":111,"skipped":1804,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 15:23:30.397: INFO: Waiting up to 5m0s for pod "downwardapi-volume-811b6fa1-84ea-49bf-a65f-de220fb9c4c3" in namespace "projected-5446" to be "Succeeded or Failed"
Jun  1 15:23:30.400: INFO: Pod "downwardapi-volume-811b6fa1-84ea-49bf-a65f-de220fb9c4c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.763618ms
Jun  1 15:23:32.407: INFO: Pod "downwardapi-volume-811b6fa1-84ea-49bf-a65f-de220fb9c4c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009145952s
STEP: Saw pod success
Jun  1 15:23:32.407: INFO: Pod "downwardapi-volume-811b6fa1-84ea-49bf-a65f-de220fb9c4c3" satisfied condition "Succeeded or Failed"
Jun  1 15:23:32.410: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-811b6fa1-84ea-49bf-a65f-de220fb9c4c3 container client-container: <nil>
STEP: delete the pod
Jun  1 15:23:32.428: INFO: Waiting for pod downwardapi-volume-811b6fa1-84ea-49bf-a65f-de220fb9c4c3 to disappear
Jun  1 15:23:32.431: INFO: Pod downwardapi-volume-811b6fa1-84ea-49bf-a65f-de220fb9c4c3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Jun  1 15:23:32.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5446" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":112,"skipped":1811,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-6562c93b-3ad9-4fb6-9cec-eb09995ab0d6
STEP: Creating a pod to test consume configMaps
Jun  1 15:23:32.488: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-76715a80-98c1-49f9-a985-442a19b699ce" in namespace "projected-1217" to be "Succeeded or Failed"
Jun  1 15:23:32.494: INFO: Pod "pod-projected-configmaps-76715a80-98c1-49f9-a985-442a19b699ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.17802ms
Jun  1 15:23:34.498: INFO: Pod "pod-projected-configmaps-76715a80-98c1-49f9-a985-442a19b699ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010475205s
STEP: Saw pod success
Jun  1 15:23:34.498: INFO: Pod "pod-projected-configmaps-76715a80-98c1-49f9-a985-442a19b699ce" satisfied condition "Succeeded or Failed"
Jun  1 15:23:34.502: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-76715a80-98c1-49f9-a985-442a19b699ce container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 15:23:34.525: INFO: Waiting for pod pod-projected-configmaps-76715a80-98c1-49f9-a985-442a19b699ce to disappear
Jun  1 15:23:34.529: INFO: Pod pod-projected-configmaps-76715a80-98c1-49f9-a985-442a19b699ce no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 15:23:34.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1217" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":113,"skipped":1827,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-fbbe333a-411b-4b23-a1f8-8d9c24ec72b1
STEP: Creating a pod to test consume secrets
Jun  1 15:23:34.620: INFO: Waiting up to 5m0s for pod "pod-secrets-29c90171-33d8-4113-9636-dae4cdf707e8" in namespace "secrets-6347" to be "Succeeded or Failed"
Jun  1 15:23:34.626: INFO: Pod "pod-secrets-29c90171-33d8-4113-9636-dae4cdf707e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.628347ms
Jun  1 15:23:36.630: INFO: Pod "pod-secrets-29c90171-33d8-4113-9636-dae4cdf707e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010507927s
Jun  1 15:23:38.635: INFO: Pod "pod-secrets-29c90171-33d8-4113-9636-dae4cdf707e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015259447s
STEP: Saw pod success
Jun  1 15:23:38.635: INFO: Pod "pod-secrets-29c90171-33d8-4113-9636-dae4cdf707e8" satisfied condition "Succeeded or Failed"
Jun  1 15:23:38.638: INFO: Trying to get logs from node kind-worker pod pod-secrets-29c90171-33d8-4113-9636-dae4cdf707e8 container secret-volume-test: <nil>
STEP: delete the pod
Jun  1 15:23:38.651: INFO: Waiting for pod pod-secrets-29c90171-33d8-4113-9636-dae4cdf707e8 to disappear
Jun  1 15:23:38.654: INFO: Pod pod-secrets-29c90171-33d8-4113-9636-dae4cdf707e8 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Jun  1 15:23:38.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6347" for this suite.
STEP: Destroying namespace "secret-namespace-1012" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":292,"completed":114,"skipped":1850,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 15:23:38.698: INFO: Waiting up to 5m0s for pod "busybox-user-65534-6b8e10f3-1538-4319-828d-50d3f51e97c6" in namespace "security-context-test-7227" to be "Succeeded or Failed"
Jun  1 15:23:38.700: INFO: Pod "busybox-user-65534-6b8e10f3-1538-4319-828d-50d3f51e97c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103601ms
Jun  1 15:23:40.705: INFO: Pod "busybox-user-65534-6b8e10f3-1538-4319-828d-50d3f51e97c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007476796s
Jun  1 15:23:40.705: INFO: Pod "busybox-user-65534-6b8e10f3-1538-4319-828d-50d3f51e97c6" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Jun  1 15:23:40.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7227" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":115,"skipped":1909,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 15:23:56.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6683" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":292,"completed":116,"skipped":1922,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 8 lines ...
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 15:24:03.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3846" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":292,"completed":117,"skipped":2010,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 27 lines ...
Jun  1 15:24:10.971: INFO: Pod "test-rolling-update-deployment-df7bb669b-mxm5j" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-mxm5j test-rolling-update-deployment-df7bb669b- deployment-5532 /api/v1/namespaces/deployment-5532/pods/test-rolling-update-deployment-df7bb669b-mxm5j 80664313-0ebc-4bb2-ac5d-6540bba0d2bd 15789 0 2020-06-01 15:24:08 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-df7bb669b d9d0ae38-442c-41f2-8872-b8451713a25c 0xc00341c640 0xc00341c641}] []  [{kube-controller-manager Update v1 2020-06-01 15:24:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9d0ae38-442c-41f2-8872-b8451713a25c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-06-01 15:24:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.138\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tb2f2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tb2f2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tb2f2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,Ho
stIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 15:24:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 15:24:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 15:24:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-01 15:24:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.2.138,StartTime:2020-06-01 15:24:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-01 15:24:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://b093dc90535e1a02c11f286a388782d6f3d2d3186669257fa4d1199cc895d0bb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.138,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Jun  1 15:24:10.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5532" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":292,"completed":118,"skipped":2020,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Jun  1 15:24:11.024: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 15:24:17.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8556" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":292,"completed":119,"skipped":2041,"failed":0}
SSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
... skipping 12 lines ...
Jun  1 15:24:21.411: INFO: Terminating Job.batch foo pods took: 100.250982ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Jun  1 15:24:56.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-799" for this suite.
•{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":292,"completed":120,"skipped":2044,"failed":0}
S
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 17 lines ...
Jun  1 15:26:32.679: INFO: Restart count of pod container-probe-3515/liveness-6c121e02-d51a-4b23-b3ad-e4b80bc7f1c7 is now 5 (1m32.208386677s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 15:26:32.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3515" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":292,"completed":121,"skipped":2045,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Jun  1 15:26:36.747: INFO: Initial restart count of pod busybox-3ebb21ca-e44f-4d5b-b310-460246ab07fc is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 15:30:37.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5115" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":292,"completed":122,"skipped":2052,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 3 lines ...
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  test/e2e/common/pods.go:179
[It] should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Jun  1 15:30:41.392: INFO: Waiting up to 5m0s for pod "client-envvars-139ccbf9-172f-4577-85ae-25b12fefff17" in namespace "pods-2699" to be "Succeeded or Failed"
Jun  1 15:30:41.401: INFO: Pod "client-envvars-139ccbf9-172f-4577-85ae-25b12fefff17": Phase="Pending", Reason="", readiness=false. Elapsed: 8.752085ms
Jun  1 15:30:43.405: INFO: Pod "client-envvars-139ccbf9-172f-4577-85ae-25b12fefff17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01253028s
Jun  1 15:30:45.410: INFO: Pod "client-envvars-139ccbf9-172f-4577-85ae-25b12fefff17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017029887s
STEP: Saw pod success
Jun  1 15:30:45.410: INFO: Pod "client-envvars-139ccbf9-172f-4577-85ae-25b12fefff17" satisfied condition "Succeeded or Failed"
Jun  1 15:30:45.413: INFO: Trying to get logs from node kind-worker pod client-envvars-139ccbf9-172f-4577-85ae-25b12fefff17 container env3cont: <nil>
STEP: delete the pod
Jun  1 15:30:45.441: INFO: Waiting for pod client-envvars-139ccbf9-172f-4577-85ae-25b12fefff17 to disappear
Jun  1 15:30:45.444: INFO: Pod client-envvars-139ccbf9-172f-4577-85ae-25b12fefff17 no longer exists
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Jun  1 15:30:45.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2699" for this suite.
•{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":292,"completed":123,"skipped":2081,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 15:30:45.451: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on tmpfs
Jun  1 15:30:45.483: INFO: Waiting up to 5m0s for pod "pod-0362fcbd-83c2-467d-8ec0-b47b3d3306ec" in namespace "emptydir-1858" to be "Succeeded or Failed"
Jun  1 15:30:45.486: INFO: Pod "pod-0362fcbd-83c2-467d-8ec0-b47b3d3306ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.831871ms
Jun  1 15:30:47.490: INFO: Pod "pod-0362fcbd-83c2-467d-8ec0-b47b3d3306ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00718637s
Jun  1 15:30:49.495: INFO: Pod "pod-0362fcbd-83c2-467d-8ec0-b47b3d3306ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011693129s
STEP: Saw pod success
Jun  1 15:30:49.495: INFO: Pod "pod-0362fcbd-83c2-467d-8ec0-b47b3d3306ec" satisfied condition "Succeeded or Failed"
Jun  1 15:30:49.498: INFO: Trying to get logs from node kind-worker pod pod-0362fcbd-83c2-467d-8ec0-b47b3d3306ec container test-container: <nil>
STEP: delete the pod
Jun  1 15:30:49.525: INFO: Waiting for pod pod-0362fcbd-83c2-467d-8ec0-b47b3d3306ec to disappear
Jun  1 15:30:49.528: INFO: Pod pod-0362fcbd-83c2-467d-8ec0-b47b3d3306ec no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 15:30:49.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1858" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":124,"skipped":2082,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 15:30:53.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2790" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":292,"completed":125,"skipped":2117,"failed":0}
S
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Jun  1 15:30:57.707: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:30:57.711: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:30:57.734: INFO: Unable to read jessie_udp@dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:30:57.737: INFO: Unable to read jessie_tcp@dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:30:57.740: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:30:57.743: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:30:57.763: INFO: Lookups using dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41 failed for: [wheezy_udp@dns-test-service.dns-7631.svc.cluster.local wheezy_tcp@dns-test-service.dns-7631.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local jessie_udp@dns-test-service.dns-7631.svc.cluster.local jessie_tcp@dns-test-service.dns-7631.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local]

Jun  1 15:31:02.767: INFO: Unable to read wheezy_udp@dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:02.771: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:02.774: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:02.779: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:02.801: INFO: Unable to read jessie_udp@dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:02.803: INFO: Unable to read jessie_tcp@dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:02.807: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:02.810: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:02.834: INFO: Lookups using dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41 failed for: [wheezy_udp@dns-test-service.dns-7631.svc.cluster.local wheezy_tcp@dns-test-service.dns-7631.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local jessie_udp@dns-test-service.dns-7631.svc.cluster.local jessie_tcp@dns-test-service.dns-7631.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local]

Jun  1 15:31:07.767: INFO: Unable to read wheezy_udp@dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:07.772: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:07.775: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:07.779: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:07.802: INFO: Unable to read jessie_udp@dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:07.806: INFO: Unable to read jessie_tcp@dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:07.808: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:07.811: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:07.834: INFO: Lookups using dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41 failed for: [wheezy_udp@dns-test-service.dns-7631.svc.cluster.local wheezy_tcp@dns-test-service.dns-7631.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local jessie_udp@dns-test-service.dns-7631.svc.cluster.local jessie_tcp@dns-test-service.dns-7631.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local]

Jun  1 15:31:12.768: INFO: Unable to read wheezy_udp@dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:12.774: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:12.777: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:12.780: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:12.801: INFO: Unable to read jessie_udp@dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:12.804: INFO: Unable to read jessie_tcp@dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:12.807: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:12.810: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:12.833: INFO: Lookups using dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41 failed for: [wheezy_udp@dns-test-service.dns-7631.svc.cluster.local wheezy_tcp@dns-test-service.dns-7631.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local jessie_udp@dns-test-service.dns-7631.svc.cluster.local jessie_tcp@dns-test-service.dns-7631.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local]

Jun  1 15:31:17.767: INFO: Unable to read wheezy_udp@dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:17.771: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:17.775: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:17.779: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:17.801: INFO: Unable to read jessie_udp@dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:17.803: INFO: Unable to read jessie_tcp@dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:17.807: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:17.809: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:17.840: INFO: Lookups using dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41 failed for: [wheezy_udp@dns-test-service.dns-7631.svc.cluster.local wheezy_tcp@dns-test-service.dns-7631.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local jessie_udp@dns-test-service.dns-7631.svc.cluster.local jessie_tcp@dns-test-service.dns-7631.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local]

Jun  1 15:31:22.768: INFO: Unable to read wheezy_udp@dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:22.772: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:22.775: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:22.778: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:22.798: INFO: Unable to read jessie_udp@dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:22.802: INFO: Unable to read jessie_tcp@dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:22.805: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:22.808: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local from pod dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41: the server could not find the requested resource (get pods dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41)
Jun  1 15:31:22.825: INFO: Lookups using dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41 failed for: [wheezy_udp@dns-test-service.dns-7631.svc.cluster.local wheezy_tcp@dns-test-service.dns-7631.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local jessie_udp@dns-test-service.dns-7631.svc.cluster.local jessie_tcp@dns-test-service.dns-7631.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7631.svc.cluster.local]

Jun  1 15:31:27.828: INFO: DNS probes using dns-7631/dns-test-e92e7c8b-b8af-46ae-aaa8-5f3662409c41 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 15:31:27.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7631" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":292,"completed":126,"skipped":2118,"failed":0}
SSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 5 lines ...
[BeforeEach] [sig-network] Services
  test/e2e/network/service.go:809
[It] should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating service multi-endpoint-test in namespace services-1703
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1703 to expose endpoints map[]
Jun  1 15:31:28.034: INFO: Get endpoints failed (3.873072ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jun  1 15:31:29.037: INFO: successfully validated that service multi-endpoint-test in namespace services-1703 exposes endpoints map[] (1.007293578s elapsed)
STEP: Creating pod pod1 in namespace services-1703
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1703 to expose endpoints map[pod1:[100]]
Jun  1 15:31:31.072: INFO: successfully validated that service multi-endpoint-test in namespace services-1703 exposes endpoints map[pod1:[100]] (2.028406521s elapsed)
STEP: Creating pod pod2 in namespace services-1703
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1703 to expose endpoints map[pod1:[100] pod2:[101]]
... skipping 7 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jun  1 15:31:35.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1703" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:813
•{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":292,"completed":127,"skipped":2124,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 11 lines ...
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 15:31:35.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2958" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":292,"completed":128,"skipped":2173,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 15:31:35.293: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun  1 15:31:35.338: INFO: Waiting up to 5m0s for pod "pod-b87dbef1-7cc1-4ac5-be1f-b5d1d1452bfd" in namespace "emptydir-3826" to be "Succeeded or Failed"
Jun  1 15:31:35.342: INFO: Pod "pod-b87dbef1-7cc1-4ac5-be1f-b5d1d1452bfd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.472351ms
Jun  1 15:31:37.346: INFO: Pod "pod-b87dbef1-7cc1-4ac5-be1f-b5d1d1452bfd": Phase="Running", Reason="", readiness=true. Elapsed: 2.007412864s
Jun  1 15:31:39.350: INFO: Pod "pod-b87dbef1-7cc1-4ac5-be1f-b5d1d1452bfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012011368s
STEP: Saw pod success
Jun  1 15:31:39.350: INFO: Pod "pod-b87dbef1-7cc1-4ac5-be1f-b5d1d1452bfd" satisfied condition "Succeeded or Failed"
Jun  1 15:31:39.355: INFO: Trying to get logs from node kind-worker pod pod-b87dbef1-7cc1-4ac5-be1f-b5d1d1452bfd container test-container: <nil>
STEP: delete the pod
Jun  1 15:31:39.373: INFO: Waiting for pod pod-b87dbef1-7cc1-4ac5-be1f-b5d1d1452bfd to disappear
Jun  1 15:31:39.376: INFO: Pod pod-b87dbef1-7cc1-4ac5-be1f-b5d1d1452bfd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 15:31:39.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3826" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":129,"skipped":2186,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 22 lines ...
Jun  1 15:31:43.208: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jun  1 15:31:43.208: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:38093 --kubeconfig=/root/.kube/kind-test-config describe pod agnhost-master-t2g6f --namespace=kubectl-2962'
Jun  1 15:31:43.495: INFO: stderr: ""
Jun  1 15:31:43.495: INFO: stdout: "Name:         agnhost-master-t2g6f\nNamespace:    kubectl-2962\nPriority:     0\nNode:         kind-worker/172.18.0.3\nStart Time:   Mon, 01 Jun 2020 15:31:40 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.244.2.150\nIPs:\n  IP:           10.244.2.150\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://3c1cb1ea7d300f7223585cc6b301ab4ed808d5418e0506a45a167b87fbad02a8\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 01 Jun 2020 15:31:42 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-srtvh (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-srtvh:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-srtvh\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From                  Message\n  ----    ------     ----  ----                  -------\n  Normal  Scheduled  3s    default-scheduler     Successfully assigned kubectl-2962/agnhost-master-t2g6f to kind-worker\n  Normal  Pulled     2s    kubelet, kind-worker  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n  Normal  Created    2s    kubelet, kind-worker  Created container agnhost-master\n  Normal  Started    1s    kubelet, kind-worker  Started container agnhost-master\n"
Jun  1 15:31:43.495: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:38093 --kubeconfig=/root/.kube/kind-test-config describe rc agnhost-master --namespace=kubectl-2962'
Jun  1 15:31:43.786: INFO: stderr: ""
Jun  1 15:31:43.786: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-2962\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  3s    replication-controller  Created pod: agnhost-master-t2g6f\n"
Jun  1 15:31:43.786: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:38093 --kubeconfig=/root/.kube/kind-test-config describe service agnhost-master --namespace=kubectl-2962'
Jun  1 15:31:44.059: INFO: stderr: ""
Jun  1 15:31:44.059: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-2962\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.101.171.150\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.2.150:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jun  1 15:31:44.064: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:38093 --kubeconfig=/root/.kube/kind-test-config describe node kind-control-plane'
Jun  1 15:31:44.337: INFO: stderr: ""
Jun  1 15:31:44.337: INFO: stdout: "Name:               kind-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kind-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Mon, 01 Jun 2020 14:43:34 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kind-control-plane\n  AcquireTime:     <unset>\n  RenewTime:       Mon, 01 Jun 2020 15:31:34 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Mon, 01 Jun 2020 15:29:14 +0000   Mon, 01 Jun 2020 14:43:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Mon, 01 Jun 2020 15:29:14 +0000   Mon, 01 Jun 2020 14:43:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Mon, 01 Jun 2020 15:29:14 +0000   Mon, 01 Jun 2020 14:43:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Mon, 01 Jun 2020 15:29:14 +0000   Mon, 01 Jun 2020 14:44:13 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.2\n  Hostname:    kind-control-plane\nCapacity:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             53582972Ki\n  pods:               110\nAllocatable:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             53582972Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 6c48bc6a03234b17a64495f6b9653203\n  System UUID:                db9ebb10-c0ff-4b7e-8272-0cf8ef3a9745\n  Boot ID:                    411cbe6c-e8d3-4714-b57d-b5c04b8ab3f4\n  Kernel Version:             4.15.0-1044-gke\n  OS Image:                   Ubuntu 20.04 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.4-12-g1e902b2d\n  Kubelet Version:            v1.19.0-beta.0.318+b618411f1edb98\n  Kube-Proxy Version:         v1.19.0-beta.0.318+b618411f1edb98\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (7 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-66bff467f8-4f7ss                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     47m\n  kube-system                 etcd-kind-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         48m\n  kube-system          
       kindnet-tc2cj                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      47m\n  kube-system                 kube-apiserver-kind-control-plane             250m (3%)     0 (0%)      0 (0%)           0 (0%)         48m\n  kube-system                 kube-controller-manager-kind-control-plane    200m (2%)     0 (0%)      0 (0%)           0 (0%)         48m\n  kube-system                 kube-proxy-4vkdm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47m\n  kube-system                 kube-scheduler-kind-control-plane             100m (1%)     0 (0%)      0 (0%)           0 (0%)         48m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                750m (9%)   100m (1%)\n  memory             120Mi (0%)  220Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:\n  Type     Reason                    Age                From                            Message\n  ----     ------                    ----               ----                            -------\n  Normal   NodeHasSufficientMemory   48m (x5 over 48m)  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure     48m (x5 over 48m)  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID      48m (x4 over 48m)  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientPID\n  Normal   Starting                  48m                kubelet, kind-control-plane     Starting kubelet.\n  Warning  CheckLimitsForResolvConf  48m                kubelet, kind-control-plane     Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\n  Normal   NodeHasSufficientMemory   48m                kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure     48m                kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID      48m                kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientPID\n  Normal   NodeAllocatableEnforced   48m                kubelet, kind-control-plane     Updated Node Allocatable limit across pods\n  Normal   Starting                  47m                kube-proxy, kind-control-plane  Starting kube-proxy.\n  Normal   NodeReady                 47m                kubelet, kind-control-plane     Node kind-control-plane status is now: NodeReady\n"
Jun  1 15:31:44.337: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:38093 --kubeconfig=/root/.kube/kind-test-config describe namespace kubectl-2962'
Jun  1 15:31:44.590: INFO: stderr: ""
Jun  1 15:31:44.590: INFO: stdout: "Name:         kubectl-2962\nLabels:       e2e-framework=kubectl\n              e2e-run=c5ddb8ac-f47a-4b76-8a94-ef9573d51cfb\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 15:31:44.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2962" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":292,"completed":130,"skipped":2211,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 15:31:44.598: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun  1 15:31:44.631: INFO: Waiting up to 5m0s for pod "pod-32db4adc-f444-49cb-a461-807476dcd495" in namespace "emptydir-2243" to be "Succeeded or Failed"
Jun  1 15:31:44.638: INFO: Pod "pod-32db4adc-f444-49cb-a461-807476dcd495": Phase="Pending", Reason="", readiness=false. Elapsed: 7.607415ms
Jun  1 15:31:46.643: INFO: Pod "pod-32db4adc-f444-49cb-a461-807476dcd495": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012501004s
Jun  1 15:31:48.647: INFO: Pod "pod-32db4adc-f444-49cb-a461-807476dcd495": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016054123s
STEP: Saw pod success
Jun  1 15:31:48.647: INFO: Pod "pod-32db4adc-f444-49cb-a461-807476dcd495" satisfied condition "Succeeded or Failed"
Jun  1 15:31:48.650: INFO: Trying to get logs from node kind-worker2 pod pod-32db4adc-f444-49cb-a461-807476dcd495 container test-container: <nil>
STEP: delete the pod
Jun  1 15:31:48.675: INFO: Waiting for pod pod-32db4adc-f444-49cb-a461-807476dcd495 to disappear
Jun  1 15:31:48.678: INFO: Pod pod-32db4adc-f444-49cb-a461-807476dcd495 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 15:31:48.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2243" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":131,"skipped":2214,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 15:31:48.720: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f4226d1-3b25-4751-ad05-2ecc8dfdb983" in namespace "downward-api-7089" to be "Succeeded or Failed"
Jun  1 15:31:48.723: INFO: Pod "downwardapi-volume-0f4226d1-3b25-4751-ad05-2ecc8dfdb983": Phase="Pending", Reason="", readiness=false. Elapsed: 3.38813ms
Jun  1 15:31:50.727: INFO: Pod "downwardapi-volume-0f4226d1-3b25-4751-ad05-2ecc8dfdb983": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007468871s
Jun  1 15:31:52.734: INFO: Pod "downwardapi-volume-0f4226d1-3b25-4751-ad05-2ecc8dfdb983": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014336629s
STEP: Saw pod success
Jun  1 15:31:52.734: INFO: Pod "downwardapi-volume-0f4226d1-3b25-4751-ad05-2ecc8dfdb983" satisfied condition "Succeeded or Failed"
Jun  1 15:31:52.738: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-0f4226d1-3b25-4751-ad05-2ecc8dfdb983 container client-container: <nil>
STEP: delete the pod
Jun  1 15:31:52.758: INFO: Waiting for pod downwardapi-volume-0f4226d1-3b25-4751-ad05-2ecc8dfdb983 to disappear
Jun  1 15:31:52.760: INFO: Pod downwardapi-volume-0f4226d1-3b25-4751-ad05-2ecc8dfdb983 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 15:31:52.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7089" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":292,"completed":132,"skipped":2222,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-c34af4de-5765-4f24-8fc0-4bee4918fd22
STEP: Creating a pod to test consume configMaps
Jun  1 15:31:52.842: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a226693f-fdd9-41b0-abf9-1aa82fee6bf4" in namespace "projected-8688" to be "Succeeded or Failed"
Jun  1 15:31:52.845: INFO: Pod "pod-projected-configmaps-a226693f-fdd9-41b0-abf9-1aa82fee6bf4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.345834ms
Jun  1 15:31:54.850: INFO: Pod "pod-projected-configmaps-a226693f-fdd9-41b0-abf9-1aa82fee6bf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00805002s
Jun  1 15:31:56.854: INFO: Pod "pod-projected-configmaps-a226693f-fdd9-41b0-abf9-1aa82fee6bf4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012700898s
STEP: Saw pod success
Jun  1 15:31:56.855: INFO: Pod "pod-projected-configmaps-a226693f-fdd9-41b0-abf9-1aa82fee6bf4" satisfied condition "Succeeded or Failed"
Jun  1 15:31:56.858: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-a226693f-fdd9-41b0-abf9-1aa82fee6bf4 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 15:31:56.872: INFO: Waiting for pod pod-projected-configmaps-a226693f-fdd9-41b0-abf9-1aa82fee6bf4 to disappear
Jun  1 15:31:56.875: INFO: Pod pod-projected-configmaps-a226693f-fdd9-41b0-abf9-1aa82fee6bf4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 15:31:56.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8688" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":292,"completed":133,"skipped":2247,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-95a4ade0-32b9-4bc0-b47f-0385250bcfe2
STEP: Creating a pod to test consume secrets
Jun  1 15:31:56.922: INFO: Waiting up to 5m0s for pod "pod-secrets-238415a7-7d94-4905-b350-06724fd9dc81" in namespace "secrets-2780" to be "Succeeded or Failed"
Jun  1 15:31:56.926: INFO: Pod "pod-secrets-238415a7-7d94-4905-b350-06724fd9dc81": Phase="Pending", Reason="", readiness=false. Elapsed: 3.969261ms
Jun  1 15:31:58.930: INFO: Pod "pod-secrets-238415a7-7d94-4905-b350-06724fd9dc81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008475351s
Jun  1 15:32:00.934: INFO: Pod "pod-secrets-238415a7-7d94-4905-b350-06724fd9dc81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012727604s
STEP: Saw pod success
Jun  1 15:32:00.934: INFO: Pod "pod-secrets-238415a7-7d94-4905-b350-06724fd9dc81" satisfied condition "Succeeded or Failed"
Jun  1 15:32:00.938: INFO: Trying to get logs from node kind-worker pod pod-secrets-238415a7-7d94-4905-b350-06724fd9dc81 container secret-env-test: <nil>
STEP: delete the pod
Jun  1 15:32:00.951: INFO: Waiting for pod pod-secrets-238415a7-7d94-4905-b350-06724fd9dc81 to disappear
Jun  1 15:32:00.954: INFO: Pod pod-secrets-238415a7-7d94-4905-b350-06724fd9dc81 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Jun  1 15:32:00.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2780" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":292,"completed":134,"skipped":2271,"failed":0}

------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Jun  1 15:32:03.018: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Jun  1 15:32:03.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3601" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":292,"completed":135,"skipped":2271,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 15:32:18.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1986" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":292,"completed":136,"skipped":2276,"failed":0}

------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 15:32:19.025: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4ec40c82-5564-4bd4-b74c-55954d306e23" in namespace "downward-api-5536" to be "Succeeded or Failed"
Jun  1 15:32:19.033: INFO: Pod "downwardapi-volume-4ec40c82-5564-4bd4-b74c-55954d306e23": Phase="Pending", Reason="", readiness=false. Elapsed: 7.595702ms
Jun  1 15:32:21.036: INFO: Pod "downwardapi-volume-4ec40c82-5564-4bd4-b74c-55954d306e23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010444771s
Jun  1 15:32:23.040: INFO: Pod "downwardapi-volume-4ec40c82-5564-4bd4-b74c-55954d306e23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014302023s
STEP: Saw pod success
Jun  1 15:32:23.040: INFO: Pod "downwardapi-volume-4ec40c82-5564-4bd4-b74c-55954d306e23" satisfied condition "Succeeded or Failed"
Jun  1 15:32:23.043: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-4ec40c82-5564-4bd4-b74c-55954d306e23 container client-container: <nil>
STEP: delete the pod
Jun  1 15:32:23.058: INFO: Waiting for pod downwardapi-volume-4ec40c82-5564-4bd4-b74c-55954d306e23 to disappear
Jun  1 15:32:23.060: INFO: Pod downwardapi-volume-4ec40c82-5564-4bd4-b74c-55954d306e23 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 15:32:23.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5536" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":292,"completed":137,"skipped":2276,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  1 15:32:23.067: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename webhook
... skipping 6 lines ...
STEP: Wait for the deployment to be ready
Jun  1 15:32:23.623: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
Jun  1 15:32:25.635: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726622343, loc:(*time.Location)(0x8006d20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726622343, loc:(*time.Location)(0x8006d20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726622343, loc:(*time.Location)(0x8006d20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726622343, loc:(*time.Location)(0x8006d20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun  1 15:32:28.647: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jun  1 15:32:28.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-574" for this suite.
STEP: Destroying namespace "webhook-574-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":292,"completed":138,"skipped":2279,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Jun  1 15:32:44.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3378" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":292,"completed":139,"skipped":2285,"failed":0}
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 36 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Jun  1 15:33:01.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-586" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":292,"completed":140,"skipped":2290,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 12 lines ...
STEP: reading a file in the container
Jun  1 15:33:04.374: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl exec --namespace=svcaccounts-8517 pod-service-account-69c55d1c-3b00-4a01-8af4-87a85aa62a83 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
Jun  1 15:33:04.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8517" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":292,"completed":141,"skipped":2315,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Aggregator
... skipping 20 lines ...
[AfterEach] [sig-api-machinery] Aggregator
  test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  test/e2e/framework/framework.go:175
Jun  1 15:33:27.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-2402" for this suite.
•{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":292,"completed":142,"skipped":2329,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Jun  1 15:33:27.595: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun  1 15:33:27.653: INFO: Waiting up to 5m0s for pod "pod-3cb6e4ad-9d34-4dbf-9167-e322d5b33994" in namespace "emptydir-3462" to be "Succeeded or Failed"
Jun  1 15:33:27.656: INFO: Pod "pod-3cb6e4ad-9d34-4dbf-9167-e322d5b33994": Phase="Pending", Reason="", readiness=false. Elapsed: 2.588032ms
Jun  1 15:33:29.660: INFO: Pod "pod-3cb6e4ad-9d34-4dbf-9167-e322d5b33994": Phase="Running", Reason="", readiness=true. Elapsed: 2.006776172s
Jun  1 15:33:31.666: INFO: Pod "pod-3cb6e4ad-9d34-4dbf-9167-e322d5b33994": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012365559s
STEP: Saw pod success
Jun  1 15:33:31.666: INFO: Pod "pod-3cb6e4ad-9d34-4dbf-9167-e322d5b33994" satisfied condition "Succeeded or Failed"
Jun  1 15:33:31.668: INFO: Trying to get logs from node kind-worker pod pod-3cb6e4ad-9d34-4dbf-9167-e322d5b33994 container test-container: <nil>
STEP: delete the pod
Jun  1 15:33:31.684: INFO: Waiting for pod pod-3cb6e4ad-9d34-4dbf-9167-e322d5b33994 to disappear
Jun  1 15:33:31.687: INFO: Pod pod-3cb6e4ad-9d34-4dbf-9167-e322d5b33994 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Jun  1 15:33:31.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3462" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":143,"skipped":2331,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Jun  1 15:33:35.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9434" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":292,"completed":144,"skipped":2357,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Jun  1 15:33:36.030: INFO: stderr: ""
Jun  1 15:33:36.030: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 15:33:36.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-545" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":292,"completed":145,"skipped":2361,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-projected-766r
STEP: Creating a pod to test atomic-volume-subpath
Jun  1 15:33:36.080: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-766r" in namespace "subpath-2507" to be "Succeeded or Failed"
Jun  1 15:33:36.083: INFO: Pod "pod-subpath-test-projected-766r": Phase="Pending", Reason="", readiness=false. Elapsed: 3.591463ms
Jun  1 15:33:38.087: INFO: Pod "pod-subpath-test-projected-766r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007011428s
Jun  1 15:33:40.090: INFO: Pod "pod-subpath-test-projected-766r": Phase="Running", Reason="", readiness=true. Elapsed: 4.010413941s
Jun  1 15:33:42.094: INFO: Pod "pod-subpath-test-projected-766r": Phase="Running", Reason="", readiness=true. Elapsed: 6.01459679s
Jun  1 15:33:44.100: INFO: Pod "pod-subpath-test-projected-766r": Phase="Running", Reason="", readiness=true. Elapsed: 8.01997531s
Jun  1 15:33:46.103: INFO: Pod "pod-subpath-test-projected-766r": Phase="Running", Reason="", readiness=true. Elapsed: 10.023173549s
... skipping 2 lines ...
Jun  1 15:33:52.117: INFO: Pod "pod-subpath-test-projected-766r": Phase="Running", Reason="", readiness=true. Elapsed: 16.037180887s
Jun  1 15:33:54.121: INFO: Pod "pod-subpath-test-projected-766r": Phase="Running", Reason="", readiness=true. Elapsed: 18.041509984s
Jun  1 15:33:56.126: INFO: Pod "pod-subpath-test-projected-766r": Phase="Running", Reason="", readiness=true. Elapsed: 20.046425795s
Jun  1 15:33:58.131: INFO: Pod "pod-subpath-test-projected-766r": Phase="Running", Reason="", readiness=true. Elapsed: 22.051188769s
Jun  1 15:34:00.136: INFO: Pod "pod-subpath-test-projected-766r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.056213849s
STEP: Saw pod success
Jun  1 15:34:00.136: INFO: Pod "pod-subpath-test-projected-766r" satisfied condition "Succeeded or Failed"
Jun  1 15:34:00.139: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-projected-766r container test-container-subpath-projected-766r: <nil>
STEP: delete the pod
Jun  1 15:34:00.158: INFO: Waiting for pod pod-subpath-test-projected-766r to disappear
Jun  1 15:34:00.162: INFO: Pod pod-subpath-test-projected-766r no longer exists
STEP: Deleting pod pod-subpath-test-projected-766r
Jun  1 15:34:00.162: INFO: Deleting pod "pod-subpath-test-projected-766r" in namespace "subpath-2507"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Jun  1 15:34:00.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2507" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":292,"completed":146,"skipped":2368,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  test/e2e/framework/framework.go:175
Jun  1 15:34:06.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3423" for this suite.
STEP: Destroying namespace "webhook-3423-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":292,"completed":147,"skipped":2375,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Jun  1 15:34:10.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8916" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":148,"skipped":2395,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-4aaa4671-90e0-4caa-8971-57e793b51d10
STEP: Creating a pod to test consume configMaps
Jun  1 15:34:10.554: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-27260a24-a7a2-437a-be09-f987e9b3fc43" in namespace "projected-5671" to be "Succeeded or Failed"
Jun  1 15:34:10.557: INFO: Pod "pod-projected-configmaps-27260a24-a7a2-437a-be09-f987e9b3fc43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.931828ms
Jun  1 15:34:12.562: INFO: Pod "pod-projected-configmaps-27260a24-a7a2-437a-be09-f987e9b3fc43": Phase="Running", Reason="", readiness=true. Elapsed: 2.008076232s
Jun  1 15:34:14.567: INFO: Pod "pod-projected-configmaps-27260a24-a7a2-437a-be09-f987e9b3fc43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012519164s
STEP: Saw pod success
Jun  1 15:34:14.567: INFO: Pod "pod-projected-configmaps-27260a24-a7a2-437a-be09-f987e9b3fc43" satisfied condition "Succeeded or Failed"
Jun  1 15:34:14.570: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-27260a24-a7a2-437a-be09-f987e9b3fc43 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 15:34:14.587: INFO: Waiting for pod pod-projected-configmaps-27260a24-a7a2-437a-be09-f987e9b3fc43 to disappear
Jun  1 15:34:14.591: INFO: Pod pod-projected-configmaps-27260a24-a7a2-437a-be09-f987e9b3fc43 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 15:34:14.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5671" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":149,"skipped":2401,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
  test/e2e/framework/framework.go:175
Jun  1 15:34:18.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2354" for this suite.
STEP: Destroying namespace "webhook-2354-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":292,"completed":150,"skipped":2404,"failed":0}
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 15 lines ...
  test/e2e/framework/framework.go:175
Jun  1 15:34:24.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7272" for this suite.
STEP: Destroying namespace "nsdeletetest-7734" for this suite.
Jun  1 15:34:24.764: INFO: Namespace nsdeletetest-7734 was already deleted
STEP: Destroying namespace "nsdeletetest-8476" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":292,"completed":151,"skipped":2406,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 27 lines ...
  test/e2e/framework/framework.go:175
Jun  1 15:34:31.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8817" for this suite.
STEP: Destroying namespace "webhook-8817-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":292,"completed":152,"skipped":2414,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Jun  1 15:34:31.830: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://127.0.0.1:38093 --kubeconfig=/root/.kube/kind-test-config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Jun  1 15:34:32.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6443" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":292,"completed":153,"skipped":2431,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-e42304e1-b075-40d4-8d6d-6b954dfb2817
STEP: Creating a pod to test consume configMaps
Jun  1 15:34:32.104: INFO: Waiting up to 5m0s for pod "pod-configmaps-ddf51bd5-9114-4f75-a4bf-331a364889a5" in namespace "configmap-5179" to be "Succeeded or Failed"
Jun  1 15:34:32.108: INFO: Pod "pod-configmaps-ddf51bd5-9114-4f75-a4bf-331a364889a5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.767759ms
Jun  1 15:34:34.114: INFO: Pod "pod-configmaps-ddf51bd5-9114-4f75-a4bf-331a364889a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009670153s
Jun  1 15:34:36.118: INFO: Pod "pod-configmaps-ddf51bd5-9114-4f75-a4bf-331a364889a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013998279s
STEP: Saw pod success
Jun  1 15:34:36.118: INFO: Pod "pod-configmaps-ddf51bd5-9114-4f75-a4bf-331a364889a5" satisfied condition "Succeeded or Failed"
Jun  1 15:34:36.122: INFO: Trying to get logs from node kind-worker pod pod-configmaps-ddf51bd5-9114-4f75-a4bf-331a364889a5 container configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 15:34:36.139: INFO: Waiting for pod pod-configmaps-ddf51bd5-9114-4f75-a4bf-331a364889a5 to disappear
Jun  1 15:34:36.143: INFO: Pod pod-configmaps-ddf51bd5-9114-4f75-a4bf-331a364889a5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 15:34:36.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5179" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":292,"completed":154,"skipped":2469,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Jun  1 15:34:36.152: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jun  1 15:34:36.190: INFO: Waiting up to 5m0s for pod "downward-api-4cc3bda4-aa0c-4509-b774-1b16457ba694" in namespace "downward-api-7681" to be "Succeeded or Failed"
Jun  1 15:34:36.193: INFO: Pod "downward-api-4cc3bda4-aa0c-4509-b774-1b16457ba694": Phase="Pending", Reason="", readiness=false. Elapsed: 3.174691ms
Jun  1 15:34:38.197: INFO: Pod "downward-api-4cc3bda4-aa0c-4509-b774-1b16457ba694": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007026399s
Jun  1 15:34:40.201: INFO: Pod "downward-api-4cc3bda4-aa0c-4509-b774-1b16457ba694": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010972197s
STEP: Saw pod success
Jun  1 15:34:40.201: INFO: Pod "downward-api-4cc3bda4-aa0c-4509-b774-1b16457ba694" satisfied condition "Succeeded or Failed"
Jun  1 15:34:40.204: INFO: Trying to get logs from node kind-worker pod downward-api-4cc3bda4-aa0c-4509-b774-1b16457ba694 container dapi-container: <nil>
STEP: delete the pod
Jun  1 15:34:40.219: INFO: Waiting for pod downward-api-4cc3bda4-aa0c-4509-b774-1b16457ba694 to disappear
Jun  1 15:34:40.223: INFO: Pod downward-api-4cc3bda4-aa0c-4509-b774-1b16457ba694 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Jun  1 15:34:40.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7681" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":292,"completed":155,"skipped":2485,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-d5c1a4d1-4262-4940-9dc2-f8d80d457e97
STEP: Creating a pod to test consume configMaps
Jun  1 15:34:40.311: INFO: Waiting up to 5m0s for pod "pod-configmaps-8d42a68e-d893-4553-b8ab-240f854865c7" in namespace "configmap-5687" to be "Succeeded or Failed"
Jun  1 15:34:40.313: INFO: Pod "pod-configmaps-8d42a68e-d893-4553-b8ab-240f854865c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.776499ms
Jun  1 15:34:42.317: INFO: Pod "pod-configmaps-8d42a68e-d893-4553-b8ab-240f854865c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006686691s
Jun  1 15:34:44.320: INFO: Pod "pod-configmaps-8d42a68e-d893-4553-b8ab-240f854865c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009343463s
STEP: Saw pod success
Jun  1 15:34:44.320: INFO: Pod "pod-configmaps-8d42a68e-d893-4553-b8ab-240f854865c7" satisfied condition "Succeeded or Failed"
Jun  1 15:34:44.323: INFO: Trying to get logs from node kind-worker pod pod-configmaps-8d42a68e-d893-4553-b8ab-240f854865c7 container configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 15:34:44.337: INFO: Waiting for pod pod-configmaps-8d42a68e-d893-4553-b8ab-240f854865c7 to disappear
Jun  1 15:34:44.339: INFO: Pod pod-configmaps-8d42a68e-d893-4553-b8ab-240f854865c7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 15:34:44.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5687" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":292,"completed":156,"skipped":2487,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Jun  1 15:34:48.443: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:34:48.446: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:34:48.455: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:34:48.458: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:34:48.461: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:34:48.464: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:34:48.471: INFO: Lookups using dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9436.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9436.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local jessie_udp@dns-test-service-2.dns-9436.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9436.svc.cluster.local]

Jun  1 15:34:53.476: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:34:53.479: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:34:53.483: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:34:53.487: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:34:53.496: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:34:53.499: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:34:53.502: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:34:53.506: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:34:53.511: INFO: Lookups using dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9436.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9436.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local jessie_udp@dns-test-service-2.dns-9436.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9436.svc.cluster.local]

Jun  1 15:34:58.476: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:34:58.480: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:34:58.483: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:34:58.487: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:34:58.497: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:34:58.501: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:34:58.504: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:34:58.507: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:34:58.513: INFO: Lookups using dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9436.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9436.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local jessie_udp@dns-test-service-2.dns-9436.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9436.svc.cluster.local]

Jun  1 15:35:03.475: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:03.479: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:03.482: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:03.487: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:03.497: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:03.500: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:03.503: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:03.506: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:03.513: INFO: Lookups using dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9436.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9436.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local jessie_udp@dns-test-service-2.dns-9436.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9436.svc.cluster.local]

Jun  1 15:35:08.479: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:08.484: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:08.488: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:08.492: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:08.502: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:08.505: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:08.508: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:08.511: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:08.518: INFO: Lookups using dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9436.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9436.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local jessie_udp@dns-test-service-2.dns-9436.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9436.svc.cluster.local]

Jun  1 15:35:13.478: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:13.482: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:13.486: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:13.490: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:13.502: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:13.506: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:13.509: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:13.512: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9436.svc.cluster.local from pod dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296: the server could not find the requested resource (get pods dns-test-afa871aa-ef79-4164-893d-65bf17b06296)
Jun  1 15:35:13.519: INFO: Lookups using dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9436.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9436.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9436.svc.cluster.local jessie_udp@dns-test-service-2.dns-9436.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9436.svc.cluster.local]

Jun  1 15:35:18.509: INFO: DNS probes using dns-9436/dns-test-afa871aa-ef79-4164-893d-65bf17b06296 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Jun  1 15:35:18.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9436" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":292,"completed":157,"skipped":2511,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jun  1 15:35:18.622: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c22a198-8661-4564-9a39-893a2dd84dac" in namespace "downward-api-4368" to be "Succeeded or Failed"
Jun  1 15:35:18.625: INFO: Pod "downwardapi-volume-5c22a198-8661-4564-9a39-893a2dd84dac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.551427ms
Jun  1 15:35:20.629: INFO: Pod "downwardapi-volume-5c22a198-8661-4564-9a39-893a2dd84dac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007339486s
Jun  1 15:35:22.632: INFO: Pod "downwardapi-volume-5c22a198-8661-4564-9a39-893a2dd84dac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009991425s
STEP: Saw pod success
Jun  1 15:35:22.632: INFO: Pod "downwardapi-volume-5c22a198-8661-4564-9a39-893a2dd84dac" satisfied condition "Succeeded or Failed"
Jun  1 15:35:22.636: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-5c22a198-8661-4564-9a39-893a2dd84dac container client-container: <nil>
STEP: delete the pod
Jun  1 15:35:22.654: INFO: Waiting for pod downwardapi-volume-5c22a198-8661-4564-9a39-893a2dd84dac to disappear
Jun  1 15:35:22.656: INFO: Pod downwardapi-volume-5c22a198-8661-4564-9a39-893a2dd84dac no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Jun  1 15:35:22.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4368" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":292,"completed":158,"skipped":2525,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Jun  1 15:35:26.706: INFO: Initial restart count of pod liveness-bb658c46-a114-45f1-b61a-b75a12afd079 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Jun  1 15:39:27.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2719" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":292,"completed":159,"skipped":2537,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 9 lines ...
STEP: creating pod
Jun  1 15:39:31.292: INFO: Pod pod-hostip-9e4446ce-242e-4912-a855-524b8f9f46d3 has hostIP: 172.18.0.3
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Jun  1 15:39:31.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4552" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":292,"completed":160,"skipped":2548,"failed":0}
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 17 lines ...
Jun  1 15:39:41.393: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jun  1 15:39:41.396: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Jun  1 15:39:41.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1708" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":292,"completed":161,"skipped":2550,"failed":0}
SSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-7932/configmap-test-7a9ac289-f4d4-460d-b005-1caec66c4da3
STEP: Creating a pod to test consume configMaps
Jun  1 15:39:41.439: INFO: Waiting up to 5m0s for pod "pod-configmaps-8d86589a-df6e-4a3f-bf34-45c227b11e7d" in namespace "configmap-7932" to be "Succeeded or Failed"
Jun  1 15:39:41.442: INFO: Pod "pod-configmaps-8d86589a-df6e-4a3f-bf34-45c227b11e7d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.226629ms
Jun  1 15:39:43.446: INFO: Pod "pod-configmaps-8d86589a-df6e-4a3f-bf34-45c227b11e7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006829141s
STEP: Saw pod success
Jun  1 15:39:43.446: INFO: Pod "pod-configmaps-8d86589a-df6e-4a3f-bf34-45c227b11e7d" satisfied condition "Succeeded or Failed"
Jun  1 15:39:43.448: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-8d86589a-df6e-4a3f-bf34-45c227b11e7d container env-test: <nil>
STEP: delete the pod
Jun  1 15:39:43.473: INFO: Waiting for pod pod-configmaps-8d86589a-df6e-4a3f-bf34-45c227b11e7d to disappear
Jun  1 15:39:43.476: INFO: Pod pod-configmaps-8d86589a-df6e-4a3f-bf34-45c227b11e7d no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 15:39:43.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7932" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":292,"completed":162,"skipped":2556,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 8 lines ...
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:175
Jun  1 15:39:45.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-585" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":292,"completed":163,"skipped":2582,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-5620ef63-1d25-4bbd-8a9f-dec516192272
STEP: Creating a pod to test consume configMaps
Jun  1 15:39:45.596: INFO: Waiting up to 5m0s for pod "pod-configmaps-5071127e-e45f-47b5-bb02-27cf671625f4" in namespace "configmap-7662" to be "Succeeded or Failed"
Jun  1 15:39:45.599: INFO: Pod "pod-configmaps-5071127e-e45f-47b5-bb02-27cf671625f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.820814ms
Jun  1 15:39:47.603: INFO: Pod "pod-configmaps-5071127e-e45f-47b5-bb02-27cf671625f4": Phase="Running", Reason="", readiness=true. Elapsed: 2.006887337s
Jun  1 15:39:49.606: INFO: Pod "pod-configmaps-5071127e-e45f-47b5-bb02-27cf671625f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010552663s
STEP: Saw pod success
Jun  1 15:39:49.607: INFO: Pod "pod-configmaps-5071127e-e45f-47b5-bb02-27cf671625f4" satisfied condition "Succeeded or Failed"
Jun  1 15:39:49.610: INFO: Trying to get logs from node kind-worker pod pod-configmaps-5071127e-e45f-47b5-bb02-27cf671625f4 container configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 15:39:49.636: INFO: Waiting for pod pod-configmaps-5071127e-e45f-47b5-bb02-27cf671625f4 to disappear
Jun  1 15:39:49.639: INFO: Pod pod-configmaps-5071127e-e45f-47b5-bb02-27cf671625f4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Jun  1 15:39:49.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7662" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":292,"completed":164,"skipped":2597,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-4e2dcf86-a800-423e-b083-997fa9729710
STEP: Creating a pod to test consume configMaps
Jun  1 15:39:49.714: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ac249013-cceb-45f5-b550-9d9d4cda7452" in namespace "projected-2355" to be "Succeeded or Failed"
Jun  1 15:39:49.717: INFO: Pod "pod-projected-configmaps-ac249013-cceb-45f5-b550-9d9d4cda7452": Phase="Pending", Reason="", readiness=false. Elapsed: 2.940151ms
Jun  1 15:39:51.720: INFO: Pod "pod-projected-configmaps-ac249013-cceb-45f5-b550-9d9d4cda7452": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005622738s
Jun  1 15:39:53.724: INFO: Pod "pod-projected-configmaps-ac249013-cceb-45f5-b550-9d9d4cda7452": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009517255s
STEP: Saw pod success
Jun  1 15:39:53.724: INFO: Pod "pod-projected-configmaps-ac249013-cceb-45f5-b550-9d9d4cda7452" satisfied condition "Succeeded or Failed"
Jun  1 15:39:53.726: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-ac249013-cceb-45f5-b550-9d9d4cda7452 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  1 15:39:53.742: INFO: Waiting for pod pod-projected-configmaps-ac249013-cceb-45f5-b550-9d9d4cda7452 to disappear
Jun  1 15:39:53.746: INFO: Pod pod-projected-configmaps-ac249013-cceb-45f5-b550-9d9d4cda7452 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Jun  1 15:39:53.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2355" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":292,"completed":165,"skipped":2637,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 13 lines ...
Jun  1 15:39:54.025: INFO: stderr: ""
Jun  1 15:39:54.025: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  test/e2e/kubectl/kubectl.go:1533
Jun  1 15:39:54.028: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:38093 --kubeconfig=/root/.kube/kind-test-config delete pods e2e-test-httpd-pod --namespace=kubectl-9744'
{"component":"entrypoint","file":"prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","time":"2020-06-01T16:35:04Z"}
{"component":"entrypoint","file":"prow/entrypoint/run.go:245","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","time":"2020-06-01T16:35:19Z"}